00:00:00.001 Started by upstream project "autotest-nightly" build number 4131 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3493 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.021 The recommended git tool is: git 00:00:00.021 using credential 00000000-0000-0000-0000-000000000002 00:00:00.025 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.042 Fetching changes from the remote Git repository 00:00:00.045 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.066 Using shallow fetch with depth 1 00:00:00.066 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.066 > git --version # timeout=10 00:00:00.103 > git --version # 'git version 2.39.2' 00:00:00.104 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.158 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.158 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.274 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.284 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.297 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD) 00:00:03.297 > git config core.sparsecheckout # timeout=10 00:00:03.308 > git read-tree -mu HEAD # timeout=10 00:00:03.325 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=5 00:00:03.346 Commit message: 
"kid: add issue 3541" 00:00:03.346 > git rev-list --no-walk 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10 00:00:03.566 [Pipeline] Start of Pipeline 00:00:03.579 [Pipeline] library 00:00:03.580 Loading library shm_lib@master 00:00:03.580 Library shm_lib@master is cached. Copying from home. 00:00:03.596 [Pipeline] node 00:00:03.607 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:03.608 [Pipeline] { 00:00:03.618 [Pipeline] catchError 00:00:03.619 [Pipeline] { 00:00:03.631 [Pipeline] wrap 00:00:03.639 [Pipeline] { 00:00:03.646 [Pipeline] stage 00:00:03.647 [Pipeline] { (Prologue) 00:00:03.663 [Pipeline] echo 00:00:03.664 Node: VM-host-WFP7 00:00:03.669 [Pipeline] cleanWs 00:00:03.680 [WS-CLEANUP] Deleting project workspace... 00:00:03.680 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.686 [WS-CLEANUP] done 00:00:03.862 [Pipeline] setCustomBuildProperty 00:00:03.953 [Pipeline] httpRequest 00:00:04.294 [Pipeline] echo 00:00:04.295 Sorcerer 10.211.164.101 is alive 00:00:04.304 [Pipeline] retry 00:00:04.305 [Pipeline] { 00:00:04.318 [Pipeline] httpRequest 00:00:04.323 HttpMethod: GET 00:00:04.323 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:04.324 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:04.324 Response Code: HTTP/1.1 200 OK 00:00:04.325 Success: Status code 200 is in the accepted range: 200,404 00:00:04.325 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:04.470 [Pipeline] } 00:00:04.486 [Pipeline] // retry 00:00:04.494 [Pipeline] sh 00:00:04.776 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:04.791 [Pipeline] httpRequest 00:00:05.560 [Pipeline] echo 00:00:05.561 Sorcerer 10.211.164.101 is alive 00:00:05.571 [Pipeline] retry 00:00:05.572 [Pipeline] { 00:00:05.586 [Pipeline] httpRequest 00:00:05.590 HttpMethod: 
GET 00:00:05.591 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:05.592 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:05.592 Response Code: HTTP/1.1 200 OK 00:00:05.593 Success: Status code 200 is in the accepted range: 200,404 00:00:05.593 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:22.774 [Pipeline] } 00:00:22.796 [Pipeline] // retry 00:00:22.806 [Pipeline] sh 00:00:23.096 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:25.652 [Pipeline] sh 00:00:25.939 + git -C spdk log --oneline -n5 00:00:25.939 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:00:25.939 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:00:25.939 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:00:25.939 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:00:25.939 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:00:25.960 [Pipeline] writeFile 00:00:25.975 [Pipeline] sh 00:00:26.268 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:26.281 [Pipeline] sh 00:00:26.567 + cat autorun-spdk.conf 00:00:26.567 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:26.567 SPDK_RUN_ASAN=1 00:00:26.567 SPDK_RUN_UBSAN=1 00:00:26.567 SPDK_TEST_RAID=1 00:00:26.567 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:26.575 RUN_NIGHTLY=1 00:00:26.577 [Pipeline] } 00:00:26.592 [Pipeline] // stage 00:00:26.606 [Pipeline] stage 00:00:26.609 [Pipeline] { (Run VM) 00:00:26.622 [Pipeline] sh 00:00:26.907 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:26.907 + echo 'Start stage prepare_nvme.sh' 00:00:26.907 Start stage prepare_nvme.sh 00:00:26.907 + [[ -n 5 ]] 00:00:26.907 + disk_prefix=ex5 00:00:26.907 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:00:26.907 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:00:26.907 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:00:26.907 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:26.907 ++ SPDK_RUN_ASAN=1 00:00:26.907 ++ SPDK_RUN_UBSAN=1 00:00:26.907 ++ SPDK_TEST_RAID=1 00:00:26.907 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:26.907 ++ RUN_NIGHTLY=1 00:00:26.907 + cd /var/jenkins/workspace/raid-vg-autotest 00:00:26.907 + nvme_files=() 00:00:26.907 + declare -A nvme_files 00:00:26.907 + backend_dir=/var/lib/libvirt/images/backends 00:00:26.907 + nvme_files['nvme.img']=5G 00:00:26.907 + nvme_files['nvme-cmb.img']=5G 00:00:26.907 + nvme_files['nvme-multi0.img']=4G 00:00:26.907 + nvme_files['nvme-multi1.img']=4G 00:00:26.907 + nvme_files['nvme-multi2.img']=4G 00:00:26.907 + nvme_files['nvme-openstack.img']=8G 00:00:26.907 + nvme_files['nvme-zns.img']=5G 00:00:26.907 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:26.907 + (( SPDK_TEST_FTL == 1 )) 00:00:26.907 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:26.907 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:26.907 + for nvme in "${!nvme_files[@]}" 00:00:26.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:26.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.907 + for nvme in "${!nvme_files[@]}" 00:00:26.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:26.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:26.907 + for nvme in "${!nvme_files[@]}" 00:00:26.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:26.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:26.907 + for nvme in "${!nvme_files[@]}" 00:00:26.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:26.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:26.907 + for nvme in "${!nvme_files[@]}" 00:00:26.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:26.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.907 + for nvme in "${!nvme_files[@]}" 00:00:26.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:26.907 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:26.907 + for nvme in "${!nvme_files[@]}" 00:00:26.907 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:27.167 
Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:27.167 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:27.167 + echo 'End stage prepare_nvme.sh' 00:00:27.167 End stage prepare_nvme.sh 00:00:27.178 [Pipeline] sh 00:00:27.462 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:27.462 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:00:27.462 00:00:27.462 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:00:27.462 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:00:27.462 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:00:27.462 HELP=0 00:00:27.462 DRY_RUN=0 00:00:27.462 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:00:27.462 NVME_DISKS_TYPE=nvme,nvme, 00:00:27.462 NVME_AUTO_CREATE=0 00:00:27.462 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:00:27.462 NVME_CMB=,, 00:00:27.462 NVME_PMR=,, 00:00:27.462 NVME_ZNS=,, 00:00:27.462 NVME_MS=,, 00:00:27.462 NVME_FDP=,, 00:00:27.462 SPDK_VAGRANT_DISTRO=fedora39 00:00:27.462 SPDK_VAGRANT_VMCPU=10 00:00:27.462 SPDK_VAGRANT_VMRAM=12288 00:00:27.462 SPDK_VAGRANT_PROVIDER=libvirt 00:00:27.462 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:27.462 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:27.462 SPDK_OPENSTACK_NETWORK=0 00:00:27.462 VAGRANT_PACKAGE_BOX=0 00:00:27.462 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:27.462 
FORCE_DISTRO=true 00:00:27.462 VAGRANT_BOX_VERSION= 00:00:27.462 EXTRA_VAGRANTFILES= 00:00:27.462 NIC_MODEL=virtio 00:00:27.462 00:00:27.462 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:00:27.462 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:00:29.371 Bringing machine 'default' up with 'libvirt' provider... 00:00:29.631 ==> default: Creating image (snapshot of base box volume). 00:00:29.891 ==> default: Creating domain with the following settings... 00:00:29.891 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727645508_3cb518d2a2f7ac637363 00:00:29.891 ==> default: -- Domain type: kvm 00:00:29.891 ==> default: -- Cpus: 10 00:00:29.891 ==> default: -- Feature: acpi 00:00:29.891 ==> default: -- Feature: apic 00:00:29.891 ==> default: -- Feature: pae 00:00:29.891 ==> default: -- Memory: 12288M 00:00:29.891 ==> default: -- Memory Backing: hugepages: 00:00:29.891 ==> default: -- Management MAC: 00:00:29.891 ==> default: -- Loader: 00:00:29.891 ==> default: -- Nvram: 00:00:29.891 ==> default: -- Base box: spdk/fedora39 00:00:29.891 ==> default: -- Storage pool: default 00:00:29.891 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727645508_3cb518d2a2f7ac637363.img (20G) 00:00:29.891 ==> default: -- Volume Cache: default 00:00:29.891 ==> default: -- Kernel: 00:00:29.891 ==> default: -- Initrd: 00:00:29.891 ==> default: -- Graphics Type: vnc 00:00:29.891 ==> default: -- Graphics Port: -1 00:00:29.891 ==> default: -- Graphics IP: 127.0.0.1 00:00:29.891 ==> default: -- Graphics Password: Not defined 00:00:29.891 ==> default: -- Video Type: cirrus 00:00:29.891 ==> default: -- Video VRAM: 9216 00:00:29.891 ==> default: -- Sound Type: 00:00:29.891 ==> default: -- Keymap: en-us 00:00:29.891 ==> default: -- TPM Path: 00:00:29.891 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:29.891 ==> default: -- Command line args: 00:00:29.891 
==> default: -> value=-device, 00:00:29.891 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:29.891 ==> default: -> value=-drive, 00:00:29.891 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:00:29.891 ==> default: -> value=-device, 00:00:29.892 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.892 ==> default: -> value=-device, 00:00:29.892 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:29.892 ==> default: -> value=-drive, 00:00:29.892 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:29.892 ==> default: -> value=-device, 00:00:29.892 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.892 ==> default: -> value=-drive, 00:00:29.892 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:29.892 ==> default: -> value=-device, 00:00:29.892 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.892 ==> default: -> value=-drive, 00:00:29.892 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:29.892 ==> default: -> value=-device, 00:00:29.892 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.892 ==> default: Creating shared folders metadata... 00:00:29.892 ==> default: Starting domain. 00:00:31.805 ==> default: Waiting for domain to get an IP address... 00:00:49.917 ==> default: Waiting for SSH to become available... 00:00:49.917 ==> default: Configuring and enabling network interfaces... 
00:00:55.209 default: SSH address: 192.168.121.87:22 00:00:55.210 default: SSH username: vagrant 00:00:55.210 default: SSH auth method: private key 00:00:57.119 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:05.250 ==> default: Mounting SSHFS shared folder... 00:01:07.791 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:07.791 ==> default: Checking Mount.. 00:01:09.173 ==> default: Folder Successfully Mounted! 00:01:09.173 ==> default: Running provisioner: file... 00:01:10.552 default: ~/.gitconfig => .gitconfig 00:01:11.120 00:01:11.120 SUCCESS! 00:01:11.120 00:01:11.120 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:11.120 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:11.120 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:01:11.120 00:01:11.129 [Pipeline] } 00:01:11.146 [Pipeline] // stage 00:01:11.155 [Pipeline] dir 00:01:11.156 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:11.158 [Pipeline] { 00:01:11.171 [Pipeline] catchError 00:01:11.173 [Pipeline] { 00:01:11.187 [Pipeline] sh 00:01:11.469 + vagrant ssh-config --host vagrant 00:01:11.469 + sed -ne /^Host/,$p 00:01:11.469 + tee ssh_conf 00:01:14.109 Host vagrant 00:01:14.109 HostName 192.168.121.87 00:01:14.109 User vagrant 00:01:14.109 Port 22 00:01:14.109 UserKnownHostsFile /dev/null 00:01:14.109 StrictHostKeyChecking no 00:01:14.109 PasswordAuthentication no 00:01:14.109 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:14.109 IdentitiesOnly yes 00:01:14.109 LogLevel FATAL 00:01:14.109 ForwardAgent yes 00:01:14.109 ForwardX11 yes 00:01:14.109 00:01:14.122 [Pipeline] withEnv 00:01:14.123 [Pipeline] { 00:01:14.133 [Pipeline] sh 00:01:14.413 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:14.413 source /etc/os-release 00:01:14.413 [[ -e /image.version ]] && img=$(< /image.version) 00:01:14.413 # Minimal, systemd-like check. 00:01:14.413 if [[ -e /.dockerenv ]]; then 00:01:14.413 # Clear garbage from the node's name: 00:01:14.413 # agt-er_autotest_547-896 -> autotest_547-896 00:01:14.413 # $HOSTNAME is the actual container id 00:01:14.413 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:14.413 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:14.413 # We can assume this is a mount from a host where container is running, 00:01:14.413 # so fetch its hostname to easily identify the target swarm worker. 
00:01:14.413 container="$(< /etc/hostname) ($agent)" 00:01:14.413 else 00:01:14.413 # Fallback 00:01:14.413 container=$agent 00:01:14.413 fi 00:01:14.413 fi 00:01:14.413 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:14.413 00:01:14.685 [Pipeline] } 00:01:14.699 [Pipeline] // withEnv 00:01:14.706 [Pipeline] setCustomBuildProperty 00:01:14.719 [Pipeline] stage 00:01:14.721 [Pipeline] { (Tests) 00:01:14.737 [Pipeline] sh 00:01:15.019 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:15.293 [Pipeline] sh 00:01:15.577 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:15.854 [Pipeline] timeout 00:01:15.855 Timeout set to expire in 1 hr 30 min 00:01:15.856 [Pipeline] { 00:01:15.872 [Pipeline] sh 00:01:16.155 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:16.726 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:01:16.738 [Pipeline] sh 00:01:17.021 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:17.296 [Pipeline] sh 00:01:17.580 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:17.858 [Pipeline] sh 00:01:18.141 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:18.402 ++ readlink -f spdk_repo 00:01:18.402 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:18.402 + [[ -n /home/vagrant/spdk_repo ]] 00:01:18.402 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:18.402 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:18.402 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:18.402 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:18.402 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:18.402 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:18.402 + cd /home/vagrant/spdk_repo 00:01:18.402 + source /etc/os-release 00:01:18.402 ++ NAME='Fedora Linux' 00:01:18.402 ++ VERSION='39 (Cloud Edition)' 00:01:18.402 ++ ID=fedora 00:01:18.402 ++ VERSION_ID=39 00:01:18.402 ++ VERSION_CODENAME= 00:01:18.402 ++ PLATFORM_ID=platform:f39 00:01:18.402 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:18.402 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:18.402 ++ LOGO=fedora-logo-icon 00:01:18.402 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:18.402 ++ HOME_URL=https://fedoraproject.org/ 00:01:18.402 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:18.402 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:18.402 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:18.402 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:18.402 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:18.402 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:18.402 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:18.402 ++ SUPPORT_END=2024-11-12 00:01:18.402 ++ VARIANT='Cloud Edition' 00:01:18.402 ++ VARIANT_ID=cloud 00:01:18.402 + uname -a 00:01:18.402 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:18.402 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:18.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:18.972 Hugepages 00:01:18.972 node hugesize free / total 00:01:18.972 node0 1048576kB 0 / 0 00:01:18.972 node0 2048kB 0 / 0 00:01:18.972 00:01:18.972 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:18.972 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:18.972 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:18.972 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:18.972 + rm -f /tmp/spdk-ld-path 00:01:18.972 + source autorun-spdk.conf 00:01:18.972 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.972 ++ SPDK_RUN_ASAN=1 00:01:18.972 ++ SPDK_RUN_UBSAN=1 00:01:18.972 ++ SPDK_TEST_RAID=1 00:01:18.972 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:18.972 ++ RUN_NIGHTLY=1 00:01:18.972 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:18.972 + [[ -n '' ]] 00:01:18.972 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:18.972 + for M in /var/spdk/build-*-manifest.txt 00:01:18.972 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:18.972 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.233 + for M in /var/spdk/build-*-manifest.txt 00:01:19.233 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:19.233 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.233 + for M in /var/spdk/build-*-manifest.txt 00:01:19.233 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:19.233 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:19.233 ++ uname 00:01:19.233 + [[ Linux == \L\i\n\u\x ]] 00:01:19.233 + sudo dmesg -T 00:01:19.233 + sudo dmesg --clear 00:01:19.233 + dmesg_pid=5423 00:01:19.233 + [[ Fedora Linux == FreeBSD ]] 00:01:19.233 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.233 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:19.233 + sudo dmesg -Tw 00:01:19.233 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:19.233 + [[ -x /usr/src/fio-static/fio ]] 00:01:19.233 + export FIO_BIN=/usr/src/fio-static/fio 00:01:19.233 + FIO_BIN=/usr/src/fio-static/fio 00:01:19.233 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:19.233 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:19.233 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:19.233 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.233 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:19.233 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:19.233 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.233 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:19.233 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:19.233 Test configuration: 00:01:19.233 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.233 SPDK_RUN_ASAN=1 00:01:19.233 SPDK_RUN_UBSAN=1 00:01:19.233 SPDK_TEST_RAID=1 00:01:19.233 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:19.233 RUN_NIGHTLY=1 21:32:38 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:01:19.233 21:32:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:19.233 21:32:38 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:19.233 21:32:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:19.233 21:32:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:19.233 21:32:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:19.233 21:32:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.233 21:32:38 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.233 21:32:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.233 21:32:38 -- paths/export.sh@5 -- $ export PATH 00:01:19.233 21:32:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:19.233 21:32:38 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:19.233 21:32:38 -- common/autobuild_common.sh@479 -- $ date +%s 00:01:19.494 21:32:38 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727645558.XXXXXX 00:01:19.494 21:32:38 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727645558.rQKjtF 00:01:19.494 21:32:38 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:01:19.494 21:32:38 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:01:19.494 21:32:38 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:19.494 21:32:38 -- common/autobuild_common.sh@492 
-- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:19.494 21:32:38 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:19.494 21:32:38 -- common/autobuild_common.sh@495 -- $ get_config_params 00:01:19.494 21:32:38 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:19.494 21:32:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.494 21:32:38 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:19.494 21:32:38 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:01:19.494 21:32:38 -- pm/common@17 -- $ local monitor 00:01:19.494 21:32:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.494 21:32:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:19.494 21:32:38 -- pm/common@25 -- $ sleep 1 00:01:19.494 21:32:38 -- pm/common@21 -- $ date +%s 00:01:19.494 21:32:38 -- pm/common@21 -- $ date +%s 00:01:19.494 21:32:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727645558 00:01:19.494 21:32:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727645558 00:01:19.494 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727645558_collect-vmstat.pm.log 00:01:19.494 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727645558_collect-cpu-load.pm.log 00:01:20.433 21:32:39 -- common/autobuild_common.sh@498 -- 
$ trap stop_monitor_resources EXIT 00:01:20.433 21:32:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:20.433 21:32:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:20.433 21:32:39 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:20.433 21:32:39 -- spdk/autobuild.sh@16 -- $ date -u 00:01:20.433 Sun Sep 29 09:32:39 PM UTC 2024 00:01:20.433 21:32:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:20.433 v25.01-pre-17-g09cc66129 00:01:20.433 21:32:39 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:20.433 21:32:39 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:20.433 21:32:39 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:20.433 21:32:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:20.433 21:32:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.433 ************************************ 00:01:20.433 START TEST asan 00:01:20.433 ************************************ 00:01:20.433 using asan 00:01:20.433 21:32:39 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:20.433 00:01:20.433 real 0m0.001s 00:01:20.433 user 0m0.000s 00:01:20.433 sys 0m0.000s 00:01:20.433 21:32:39 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:20.433 21:32:39 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.433 ************************************ 00:01:20.433 END TEST asan 00:01:20.433 ************************************ 00:01:20.433 21:32:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:20.433 21:32:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:20.433 21:32:39 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:20.433 21:32:39 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:20.433 21:32:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.433 ************************************ 00:01:20.433 START TEST ubsan 00:01:20.433 ************************************ 00:01:20.433 using ubsan 00:01:20.433 21:32:39 ubsan -- 
common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:20.433 00:01:20.433 real 0m0.000s 00:01:20.433 user 0m0.000s 00:01:20.433 sys 0m0.000s 00:01:20.433 21:32:39 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:20.433 21:32:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:20.433 ************************************ 00:01:20.433 END TEST ubsan 00:01:20.433 ************************************ 00:01:20.693 21:32:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:20.693 21:32:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:20.693 21:32:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:20.693 21:32:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:20.693 21:32:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:20.693 21:32:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:20.693 21:32:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:20.693 21:32:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:20.693 21:32:39 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:20.693 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:20.693 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:21.263 Using 'verbs' RDMA provider 00:01:40.308 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:55.222 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:55.222 Creating mk/config.mk...done. 00:01:55.222 Creating mk/cc.flags.mk...done. 00:01:55.222 Type 'make' to build. 
00:01:55.222 21:33:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:01:55.222 21:33:13 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:55.222 21:33:13 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:55.222 21:33:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.222 ************************************ 00:01:55.222 START TEST make 00:01:55.222 ************************************ 00:01:55.222 21:33:13 make -- common/autotest_common.sh@1125 -- $ make -j10 00:01:55.222 make[1]: Nothing to be done for 'all'. 00:02:05.213 The Meson build system 00:02:05.213 Version: 1.5.0 00:02:05.213 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:05.213 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:05.213 Build type: native build 00:02:05.213 Program cat found: YES (/usr/bin/cat) 00:02:05.213 Project name: DPDK 00:02:05.213 Project version: 24.03.0 00:02:05.213 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:05.213 C linker for the host machine: cc ld.bfd 2.40-14 00:02:05.213 Host machine cpu family: x86_64 00:02:05.213 Host machine cpu: x86_64 00:02:05.213 Message: ## Building in Developer Mode ## 00:02:05.213 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:05.213 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:05.213 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:05.213 Program python3 found: YES (/usr/bin/python3) 00:02:05.213 Program cat found: YES (/usr/bin/cat) 00:02:05.213 Compiler for C supports arguments -march=native: YES 00:02:05.213 Checking for size of "void *" : 8 00:02:05.213 Checking for size of "void *" : 8 (cached) 00:02:05.213 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:05.213 Library m found: YES 00:02:05.213 Library numa found: YES 00:02:05.213 Has header "numaif.h" : YES 
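The `START TEST` / `END TEST` banners and the `real`/`user`/`sys` timings around each test above come from the `run_test` wrapper in `autotest_common.sh`. A plausible minimal reimplementation (banner text approximated from the log, not taken from the actual source) looks like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the run_test wrapper seen in the log:
# print START/END banners around a named test and time the command.
run_test() {
  local name=$1
  shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"        # 'time' is a bash keyword; timings go to stderr
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}

run_test demo echo 'using asan'
```

Wrapping every step this way is what makes the per-test `real 0m0.001s` lines in the log possible even for trivial commands like `echo 'using asan'`.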
00:02:05.213 Library fdt found: NO 00:02:05.213 Library execinfo found: NO 00:02:05.213 Has header "execinfo.h" : YES 00:02:05.213 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:05.213 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:05.213 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:05.213 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:05.213 Run-time dependency openssl found: YES 3.1.1 00:02:05.213 Run-time dependency libpcap found: YES 1.10.4 00:02:05.213 Has header "pcap.h" with dependency libpcap: YES 00:02:05.213 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.213 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.213 Compiler for C supports arguments -Wformat: YES 00:02:05.213 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.213 Compiler for C supports arguments -Wformat-security: NO 00:02:05.213 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.213 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.213 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.213 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.213 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.213 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.213 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.213 Compiler for C supports arguments -Wundef: YES 00:02:05.213 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.213 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:05.213 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.213 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.213 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.213 Program objdump found: YES (/usr/bin/objdump) 00:02:05.213 Compiler for C supports arguments -mavx512f: YES 00:02:05.213 Checking if "AVX512 
checking" compiles: YES 00:02:05.213 Fetching value of define "__SSE4_2__" : 1 00:02:05.213 Fetching value of define "__AES__" : 1 00:02:05.213 Fetching value of define "__AVX__" : 1 00:02:05.213 Fetching value of define "__AVX2__" : 1 00:02:05.213 Fetching value of define "__AVX512BW__" : 1 00:02:05.213 Fetching value of define "__AVX512CD__" : 1 00:02:05.213 Fetching value of define "__AVX512DQ__" : 1 00:02:05.213 Fetching value of define "__AVX512F__" : 1 00:02:05.213 Fetching value of define "__AVX512VL__" : 1 00:02:05.213 Fetching value of define "__PCLMUL__" : 1 00:02:05.213 Fetching value of define "__RDRND__" : 1 00:02:05.213 Fetching value of define "__RDSEED__" : 1 00:02:05.213 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:05.213 Fetching value of define "__znver1__" : (undefined) 00:02:05.213 Fetching value of define "__znver2__" : (undefined) 00:02:05.213 Fetching value of define "__znver3__" : (undefined) 00:02:05.213 Fetching value of define "__znver4__" : (undefined) 00:02:05.213 Library asan found: YES 00:02:05.213 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:05.213 Message: lib/log: Defining dependency "log" 00:02:05.213 Message: lib/kvargs: Defining dependency "kvargs" 00:02:05.213 Message: lib/telemetry: Defining dependency "telemetry" 00:02:05.213 Library rt found: YES 00:02:05.213 Checking for function "getentropy" : NO 00:02:05.213 Message: lib/eal: Defining dependency "eal" 00:02:05.213 Message: lib/ring: Defining dependency "ring" 00:02:05.213 Message: lib/rcu: Defining dependency "rcu" 00:02:05.213 Message: lib/mempool: Defining dependency "mempool" 00:02:05.213 Message: lib/mbuf: Defining dependency "mbuf" 00:02:05.213 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:05.213 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:05.213 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:05.213 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:05.213 Fetching value of define 
"__AVX512VL__" : 1 (cached) 00:02:05.213 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:05.213 Compiler for C supports arguments -mpclmul: YES 00:02:05.213 Compiler for C supports arguments -maes: YES 00:02:05.213 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.213 Compiler for C supports arguments -mavx512bw: YES 00:02:05.213 Compiler for C supports arguments -mavx512dq: YES 00:02:05.213 Compiler for C supports arguments -mavx512vl: YES 00:02:05.213 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:05.213 Compiler for C supports arguments -mavx2: YES 00:02:05.213 Compiler for C supports arguments -mavx: YES 00:02:05.213 Message: lib/net: Defining dependency "net" 00:02:05.213 Message: lib/meter: Defining dependency "meter" 00:02:05.213 Message: lib/ethdev: Defining dependency "ethdev" 00:02:05.213 Message: lib/pci: Defining dependency "pci" 00:02:05.213 Message: lib/cmdline: Defining dependency "cmdline" 00:02:05.213 Message: lib/hash: Defining dependency "hash" 00:02:05.213 Message: lib/timer: Defining dependency "timer" 00:02:05.213 Message: lib/compressdev: Defining dependency "compressdev" 00:02:05.213 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:05.213 Message: lib/dmadev: Defining dependency "dmadev" 00:02:05.213 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:05.213 Message: lib/power: Defining dependency "power" 00:02:05.213 Message: lib/reorder: Defining dependency "reorder" 00:02:05.213 Message: lib/security: Defining dependency "security" 00:02:05.213 Has header "linux/userfaultfd.h" : YES 00:02:05.213 Has header "linux/vduse.h" : YES 00:02:05.213 Message: lib/vhost: Defining dependency "vhost" 00:02:05.213 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:05.213 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.213 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.213 Message: drivers/mempool/ring: Defining 
dependency "mempool_ring" 00:02:05.213 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:05.213 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:05.213 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:05.213 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:05.213 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:05.213 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:05.213 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:05.213 Configuring doxy-api-html.conf using configuration 00:02:05.213 Configuring doxy-api-man.conf using configuration 00:02:05.213 Program mandb found: YES (/usr/bin/mandb) 00:02:05.213 Program sphinx-build found: NO 00:02:05.213 Configuring rte_build_config.h using configuration 00:02:05.213 Message: 00:02:05.213 ================= 00:02:05.213 Applications Enabled 00:02:05.213 ================= 00:02:05.213 00:02:05.213 apps: 00:02:05.213 00:02:05.213 00:02:05.213 Message: 00:02:05.213 ================= 00:02:05.213 Libraries Enabled 00:02:05.213 ================= 00:02:05.213 00:02:05.213 libs: 00:02:05.213 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.213 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:05.213 cryptodev, dmadev, power, reorder, security, vhost, 00:02:05.213 00:02:05.213 Message: 00:02:05.213 =============== 00:02:05.213 Drivers Enabled 00:02:05.213 =============== 00:02:05.213 00:02:05.213 common: 00:02:05.213 00:02:05.213 bus: 00:02:05.213 pci, vdev, 00:02:05.213 mempool: 00:02:05.213 ring, 00:02:05.213 dma: 00:02:05.213 00:02:05.213 net: 00:02:05.213 00:02:05.213 crypto: 00:02:05.213 00:02:05.213 compress: 00:02:05.213 00:02:05.213 vdpa: 00:02:05.213 00:02:05.213 00:02:05.213 Message: 00:02:05.213 ================= 00:02:05.213 Content Skipped 00:02:05.213 ================= 00:02:05.213 00:02:05.213 apps: 
00:02:05.213 dumpcap: explicitly disabled via build config 00:02:05.213 graph: explicitly disabled via build config 00:02:05.213 pdump: explicitly disabled via build config 00:02:05.213 proc-info: explicitly disabled via build config 00:02:05.213 test-acl: explicitly disabled via build config 00:02:05.213 test-bbdev: explicitly disabled via build config 00:02:05.213 test-cmdline: explicitly disabled via build config 00:02:05.213 test-compress-perf: explicitly disabled via build config 00:02:05.213 test-crypto-perf: explicitly disabled via build config 00:02:05.213 test-dma-perf: explicitly disabled via build config 00:02:05.213 test-eventdev: explicitly disabled via build config 00:02:05.213 test-fib: explicitly disabled via build config 00:02:05.213 test-flow-perf: explicitly disabled via build config 00:02:05.213 test-gpudev: explicitly disabled via build config 00:02:05.213 test-mldev: explicitly disabled via build config 00:02:05.213 test-pipeline: explicitly disabled via build config 00:02:05.214 test-pmd: explicitly disabled via build config 00:02:05.214 test-regex: explicitly disabled via build config 00:02:05.214 test-sad: explicitly disabled via build config 00:02:05.214 test-security-perf: explicitly disabled via build config 00:02:05.214 00:02:05.214 libs: 00:02:05.214 argparse: explicitly disabled via build config 00:02:05.214 metrics: explicitly disabled via build config 00:02:05.214 acl: explicitly disabled via build config 00:02:05.214 bbdev: explicitly disabled via build config 00:02:05.214 bitratestats: explicitly disabled via build config 00:02:05.214 bpf: explicitly disabled via build config 00:02:05.214 cfgfile: explicitly disabled via build config 00:02:05.214 distributor: explicitly disabled via build config 00:02:05.214 efd: explicitly disabled via build config 00:02:05.214 eventdev: explicitly disabled via build config 00:02:05.214 dispatcher: explicitly disabled via build config 00:02:05.214 gpudev: explicitly disabled via build config 
00:02:05.214 gro: explicitly disabled via build config 00:02:05.214 gso: explicitly disabled via build config 00:02:05.214 ip_frag: explicitly disabled via build config 00:02:05.214 jobstats: explicitly disabled via build config 00:02:05.214 latencystats: explicitly disabled via build config 00:02:05.214 lpm: explicitly disabled via build config 00:02:05.214 member: explicitly disabled via build config 00:02:05.214 pcapng: explicitly disabled via build config 00:02:05.214 rawdev: explicitly disabled via build config 00:02:05.214 regexdev: explicitly disabled via build config 00:02:05.214 mldev: explicitly disabled via build config 00:02:05.214 rib: explicitly disabled via build config 00:02:05.214 sched: explicitly disabled via build config 00:02:05.214 stack: explicitly disabled via build config 00:02:05.214 ipsec: explicitly disabled via build config 00:02:05.214 pdcp: explicitly disabled via build config 00:02:05.214 fib: explicitly disabled via build config 00:02:05.214 port: explicitly disabled via build config 00:02:05.214 pdump: explicitly disabled via build config 00:02:05.214 table: explicitly disabled via build config 00:02:05.214 pipeline: explicitly disabled via build config 00:02:05.214 graph: explicitly disabled via build config 00:02:05.214 node: explicitly disabled via build config 00:02:05.214 00:02:05.214 drivers: 00:02:05.214 common/cpt: not in enabled drivers build config 00:02:05.214 common/dpaax: not in enabled drivers build config 00:02:05.214 common/iavf: not in enabled drivers build config 00:02:05.214 common/idpf: not in enabled drivers build config 00:02:05.214 common/ionic: not in enabled drivers build config 00:02:05.214 common/mvep: not in enabled drivers build config 00:02:05.214 common/octeontx: not in enabled drivers build config 00:02:05.214 bus/auxiliary: not in enabled drivers build config 00:02:05.214 bus/cdx: not in enabled drivers build config 00:02:05.214 bus/dpaa: not in enabled drivers build config 00:02:05.214 bus/fslmc: 
not in enabled drivers build config 00:02:05.214 bus/ifpga: not in enabled drivers build config 00:02:05.214 bus/platform: not in enabled drivers build config 00:02:05.214 bus/uacce: not in enabled drivers build config 00:02:05.214 bus/vmbus: not in enabled drivers build config 00:02:05.214 common/cnxk: not in enabled drivers build config 00:02:05.214 common/mlx5: not in enabled drivers build config 00:02:05.214 common/nfp: not in enabled drivers build config 00:02:05.214 common/nitrox: not in enabled drivers build config 00:02:05.214 common/qat: not in enabled drivers build config 00:02:05.214 common/sfc_efx: not in enabled drivers build config 00:02:05.214 mempool/bucket: not in enabled drivers build config 00:02:05.214 mempool/cnxk: not in enabled drivers build config 00:02:05.214 mempool/dpaa: not in enabled drivers build config 00:02:05.214 mempool/dpaa2: not in enabled drivers build config 00:02:05.214 mempool/octeontx: not in enabled drivers build config 00:02:05.214 mempool/stack: not in enabled drivers build config 00:02:05.214 dma/cnxk: not in enabled drivers build config 00:02:05.214 dma/dpaa: not in enabled drivers build config 00:02:05.214 dma/dpaa2: not in enabled drivers build config 00:02:05.214 dma/hisilicon: not in enabled drivers build config 00:02:05.214 dma/idxd: not in enabled drivers build config 00:02:05.214 dma/ioat: not in enabled drivers build config 00:02:05.214 dma/skeleton: not in enabled drivers build config 00:02:05.214 net/af_packet: not in enabled drivers build config 00:02:05.214 net/af_xdp: not in enabled drivers build config 00:02:05.214 net/ark: not in enabled drivers build config 00:02:05.214 net/atlantic: not in enabled drivers build config 00:02:05.214 net/avp: not in enabled drivers build config 00:02:05.214 net/axgbe: not in enabled drivers build config 00:02:05.214 net/bnx2x: not in enabled drivers build config 00:02:05.214 net/bnxt: not in enabled drivers build config 00:02:05.214 net/bonding: not in enabled drivers 
build config 00:02:05.214 net/cnxk: not in enabled drivers build config 00:02:05.214 net/cpfl: not in enabled drivers build config 00:02:05.214 net/cxgbe: not in enabled drivers build config 00:02:05.214 net/dpaa: not in enabled drivers build config 00:02:05.214 net/dpaa2: not in enabled drivers build config 00:02:05.214 net/e1000: not in enabled drivers build config 00:02:05.214 net/ena: not in enabled drivers build config 00:02:05.214 net/enetc: not in enabled drivers build config 00:02:05.214 net/enetfec: not in enabled drivers build config 00:02:05.214 net/enic: not in enabled drivers build config 00:02:05.214 net/failsafe: not in enabled drivers build config 00:02:05.214 net/fm10k: not in enabled drivers build config 00:02:05.214 net/gve: not in enabled drivers build config 00:02:05.214 net/hinic: not in enabled drivers build config 00:02:05.214 net/hns3: not in enabled drivers build config 00:02:05.214 net/i40e: not in enabled drivers build config 00:02:05.214 net/iavf: not in enabled drivers build config 00:02:05.214 net/ice: not in enabled drivers build config 00:02:05.214 net/idpf: not in enabled drivers build config 00:02:05.214 net/igc: not in enabled drivers build config 00:02:05.214 net/ionic: not in enabled drivers build config 00:02:05.214 net/ipn3ke: not in enabled drivers build config 00:02:05.214 net/ixgbe: not in enabled drivers build config 00:02:05.214 net/mana: not in enabled drivers build config 00:02:05.214 net/memif: not in enabled drivers build config 00:02:05.214 net/mlx4: not in enabled drivers build config 00:02:05.214 net/mlx5: not in enabled drivers build config 00:02:05.214 net/mvneta: not in enabled drivers build config 00:02:05.214 net/mvpp2: not in enabled drivers build config 00:02:05.214 net/netvsc: not in enabled drivers build config 00:02:05.214 net/nfb: not in enabled drivers build config 00:02:05.214 net/nfp: not in enabled drivers build config 00:02:05.214 net/ngbe: not in enabled drivers build config 00:02:05.214 net/null: 
not in enabled drivers build config 00:02:05.214 net/octeontx: not in enabled drivers build config 00:02:05.214 net/octeon_ep: not in enabled drivers build config 00:02:05.214 net/pcap: not in enabled drivers build config 00:02:05.214 net/pfe: not in enabled drivers build config 00:02:05.214 net/qede: not in enabled drivers build config 00:02:05.214 net/ring: not in enabled drivers build config 00:02:05.214 net/sfc: not in enabled drivers build config 00:02:05.214 net/softnic: not in enabled drivers build config 00:02:05.214 net/tap: not in enabled drivers build config 00:02:05.214 net/thunderx: not in enabled drivers build config 00:02:05.214 net/txgbe: not in enabled drivers build config 00:02:05.214 net/vdev_netvsc: not in enabled drivers build config 00:02:05.214 net/vhost: not in enabled drivers build config 00:02:05.214 net/virtio: not in enabled drivers build config 00:02:05.214 net/vmxnet3: not in enabled drivers build config 00:02:05.214 raw/*: missing internal dependency, "rawdev" 00:02:05.214 crypto/armv8: not in enabled drivers build config 00:02:05.214 crypto/bcmfs: not in enabled drivers build config 00:02:05.214 crypto/caam_jr: not in enabled drivers build config 00:02:05.214 crypto/ccp: not in enabled drivers build config 00:02:05.214 crypto/cnxk: not in enabled drivers build config 00:02:05.214 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.214 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.214 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.214 crypto/mlx5: not in enabled drivers build config 00:02:05.214 crypto/mvsam: not in enabled drivers build config 00:02:05.214 crypto/nitrox: not in enabled drivers build config 00:02:05.214 crypto/null: not in enabled drivers build config 00:02:05.214 crypto/octeontx: not in enabled drivers build config 00:02:05.214 crypto/openssl: not in enabled drivers build config 00:02:05.214 crypto/scheduler: not in enabled drivers build config 00:02:05.214 crypto/uadk: not 
in enabled drivers build config 00:02:05.214 crypto/virtio: not in enabled drivers build config 00:02:05.214 compress/isal: not in enabled drivers build config 00:02:05.214 compress/mlx5: not in enabled drivers build config 00:02:05.214 compress/nitrox: not in enabled drivers build config 00:02:05.214 compress/octeontx: not in enabled drivers build config 00:02:05.214 compress/zlib: not in enabled drivers build config 00:02:05.214 regex/*: missing internal dependency, "regexdev" 00:02:05.214 ml/*: missing internal dependency, "mldev" 00:02:05.214 vdpa/ifc: not in enabled drivers build config 00:02:05.214 vdpa/mlx5: not in enabled drivers build config 00:02:05.214 vdpa/nfp: not in enabled drivers build config 00:02:05.214 vdpa/sfc: not in enabled drivers build config 00:02:05.214 event/*: missing internal dependency, "eventdev" 00:02:05.214 baseband/*: missing internal dependency, "bbdev" 00:02:05.214 gpu/*: missing internal dependency, "gpudev" 00:02:05.214 00:02:05.214 00:02:05.214 Build targets in project: 85 00:02:05.214 00:02:05.214 DPDK 24.03.0 00:02:05.214 00:02:05.214 User defined options 00:02:05.214 buildtype : debug 00:02:05.214 default_library : shared 00:02:05.214 libdir : lib 00:02:05.214 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:05.214 b_sanitize : address 00:02:05.214 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:05.214 c_link_args : 00:02:05.214 cpu_instruction_set: native 00:02:05.214 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:05.214 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:05.214 enable_docs : false 00:02:05.214 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:05.214 enable_kmods : false 00:02:05.214 max_lcores : 128 00:02:05.215 tests : false 00:02:05.215 00:02:05.215 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:05.215 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:05.215 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:05.215 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:05.215 [3/268] Linking static target lib/librte_kvargs.a 00:02:05.215 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:05.215 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:05.215 [6/268] Linking static target lib/librte_log.a 00:02:05.215 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.215 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:05.215 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.215 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.474 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.474 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:05.474 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.474 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.474 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.474 [16/268] Linking static target lib/librte_telemetry.a 00:02:05.474 [17/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.474 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.734 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.734 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:05.993 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.993 [22/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.993 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:05.993 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.993 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:05.993 [26/268] Linking target lib/librte_log.so.24.1 00:02:05.993 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:06.253 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:06.253 [29/268] Linking target lib/librte_kvargs.so.24.1 00:02:06.253 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:06.253 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:06.253 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.253 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:06.253 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:06.253 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:06.253 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:06.513 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:06.513 [38/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 
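The "User defined options" summary earlier (buildtype debug, default_library shared, b_sanitize address, tests false, max_lcores 128) corresponds to standard `meson setup` flags. As a hedged reconstruction, with the build directory and prefix assumed from the log, the command is assembled and printed here rather than executed:

```shell
#!/usr/bin/env bash
# Assemble the meson invocation implied by the "User defined options"
# summary in the log. Printed, not run: the paths are assumptions.
meson_cmd=(
  meson setup build-tmp
  --buildtype debug
  --default-library shared
  --libdir lib
  --prefix /home/vagrant/spdk_repo/spdk/dpdk/build
  -Db_sanitize=address
  -Dtests=false
  -Dmax_lcores=128
)
printf '%s ' "${meson_cmd[@]}"
echo
```

`b_sanitize=address` is what produces the `-fsanitize=address` instrumentation requested by `--enable-asan` in the SPDK configure step, and `tests`/`max_lcores` are DPDK project options rather than generic Meson ones.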
00:02:06.513 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:06.513 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:06.513 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:06.513 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:06.513 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:06.513 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:06.773 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:06.773 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:06.773 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:07.034 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:07.034 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:07.034 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:07.034 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:07.034 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:07.034 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:07.034 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:07.294 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:07.294 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:07.294 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:07.294 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:07.553 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:07.553 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 
00:02:07.554 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:07.554 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:07.554 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:07.554 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:07.554 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:07.814 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:07.814 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:07.814 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.074 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:08.074 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.074 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.074 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:08.334 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:08.334 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:08.334 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:08.334 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:08.334 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:08.334 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:08.334 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:08.594 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:08.594 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:08.594 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:08.594 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:08.854 [84/268] Compiling 
C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:08.854 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:08.854 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:08.854 [87/268] Linking static target lib/librte_eal.a 00:02:08.854 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:08.854 [89/268] Linking static target lib/librte_ring.a 00:02:08.854 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:08.854 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:09.113 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:09.114 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:09.114 [94/268] Linking static target lib/librte_rcu.a 00:02:09.114 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.114 [96/268] Linking static target lib/librte_mempool.a 00:02:09.374 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:09.374 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:09.374 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:09.374 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:09.374 [101/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.374 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:09.374 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:09.634 [104/268] Linking static target lib/librte_mbuf.a 00:02:09.634 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:09.634 [106/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.634 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:09.634 [108/268] Compiling C object 
lib/librte_net.a.p/net_rte_arp.c.o 00:02:09.634 [109/268] Linking static target lib/librte_net.a 00:02:09.634 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:09.634 [111/268] Linking static target lib/librte_meter.a 00:02:09.894 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:09.894 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:10.153 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.153 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:10.153 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.153 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:10.153 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.413 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:10.413 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:10.413 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.673 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:10.933 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:10.933 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:10.933 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:10.933 [126/268] Linking static target lib/librte_pci.a 00:02:10.933 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:11.193 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:11.193 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:11.193 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:11.193 [131/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:11.193 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:11.193 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:11.193 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:11.193 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:11.193 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:11.453 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.453 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:11.453 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:11.453 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:11.453 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:11.453 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:11.453 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:11.453 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:11.453 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:11.453 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:11.453 [147/268] Linking static target lib/librte_cmdline.a 00:02:11.713 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:11.972 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:11.972 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:11.972 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:11.972 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:11.973 [153/268] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:11.973 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:11.973 [155/268] Linking static target lib/librte_timer.a 00:02:12.232 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:12.232 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:12.492 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:12.492 [159/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:12.492 [160/268] Linking static target lib/librte_compressdev.a 00:02:12.752 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.752 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:12.752 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:12.752 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:12.752 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:12.752 [166/268] Linking static target lib/librte_dmadev.a 00:02:12.752 [167/268] Linking static target lib/librte_ethdev.a 00:02:12.752 [168/268] Linking static target lib/librte_hash.a 00:02:12.752 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:12.752 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:13.012 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:13.012 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.012 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:13.271 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:13.271 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 
00:02:13.271 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:13.271 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.531 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:13.531 [179/268] Linking static target lib/librte_cryptodev.a 00:02:13.531 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:13.531 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.531 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:13.531 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:13.791 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:13.791 [185/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.791 [186/268] Linking static target lib/librte_power.a 00:02:14.051 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:14.051 [188/268] Linking static target lib/librte_reorder.a 00:02:14.051 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:14.051 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:14.051 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:14.051 [192/268] Linking static target lib/librte_security.a 00:02:14.051 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:14.310 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.569 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:14.829 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.829 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.829 [198/268] 
Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:14.829 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.088 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:15.088 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:15.347 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:15.347 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:15.347 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:15.347 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:15.607 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:15.607 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.607 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:15.607 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:15.607 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:15.607 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:15.867 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:15.867 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:15.867 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.867 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:15.867 [216/268] Linking static target drivers/librte_bus_vdev.a 00:02:15.867 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:15.867 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:16.127 [219/268] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.127 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.127 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:16.127 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:16.127 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.127 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.127 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:16.127 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.387 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.326 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:19.233 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.233 [230/268] Linking target lib/librte_eal.so.24.1 00:02:19.233 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:19.233 [232/268] Linking target lib/librte_pci.so.24.1 00:02:19.233 [233/268] Linking target lib/librte_ring.so.24.1 00:02:19.233 [234/268] Linking target lib/librte_meter.so.24.1 00:02:19.233 [235/268] Linking target lib/librte_timer.so.24.1 00:02:19.233 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:19.233 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:19.492 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:19.492 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:19.492 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:19.492 [241/268] Generating symbol file 
lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:19.492 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:19.492 [243/268] Linking target lib/librte_mempool.so.24.1 00:02:19.492 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:19.492 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:19.492 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:19.493 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:19.752 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:19.752 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:19.752 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:19.752 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:19.752 [252/268] Linking target lib/librte_net.so.24.1 00:02:19.752 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:19.752 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:20.011 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:20.011 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:20.011 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:20.011 [258/268] Linking target lib/librte_security.so.24.1 00:02:20.011 [259/268] Linking target lib/librte_hash.so.24.1 00:02:20.271 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:20.840 [261/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:20.840 [262/268] Linking static target lib/librte_vhost.a 00:02:21.099 [263/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.099 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:21.359 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:21.359 
[266/268] Linking target lib/librte_power.so.24.1 00:02:23.275 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.275 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:23.536 INFO: autodetecting backend as ninja 00:02:23.536 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:41.645 CC lib/ut_mock/mock.o 00:02:41.645 CC lib/log/log_deprecated.o 00:02:41.645 CC lib/log/log_flags.o 00:02:41.645 CC lib/log/log.o 00:02:41.645 CC lib/ut/ut.o 00:02:41.646 LIB libspdk_ut.a 00:02:41.646 LIB libspdk_log.a 00:02:41.646 LIB libspdk_ut_mock.a 00:02:41.646 SO libspdk_ut.so.2.0 00:02:41.646 SO libspdk_log.so.7.0 00:02:41.646 SO libspdk_ut_mock.so.6.0 00:02:41.646 SYMLINK libspdk_ut.so 00:02:41.646 SYMLINK libspdk_log.so 00:02:41.646 SYMLINK libspdk_ut_mock.so 00:02:41.646 CC lib/dma/dma.o 00:02:41.646 CC lib/util/base64.o 00:02:41.646 CC lib/util/bit_array.o 00:02:41.646 CC lib/util/cpuset.o 00:02:41.646 CC lib/util/crc16.o 00:02:41.646 CC lib/util/crc32c.o 00:02:41.646 CC lib/util/crc32.o 00:02:41.646 CXX lib/trace_parser/trace.o 00:02:41.646 CC lib/ioat/ioat.o 00:02:41.646 CC lib/vfio_user/host/vfio_user_pci.o 00:02:41.646 CC lib/util/crc32_ieee.o 00:02:41.646 CC lib/vfio_user/host/vfio_user.o 00:02:41.646 CC lib/util/crc64.o 00:02:41.646 CC lib/util/dif.o 00:02:41.646 LIB libspdk_dma.a 00:02:41.646 CC lib/util/fd.o 00:02:41.646 SO libspdk_dma.so.5.0 00:02:41.646 CC lib/util/fd_group.o 00:02:41.646 CC lib/util/file.o 00:02:41.646 CC lib/util/hexlify.o 00:02:41.646 SYMLINK libspdk_dma.so 00:02:41.646 CC lib/util/iov.o 00:02:41.646 LIB libspdk_ioat.a 00:02:41.646 CC lib/util/math.o 00:02:41.646 CC lib/util/net.o 00:02:41.646 LIB libspdk_vfio_user.a 00:02:41.646 SO libspdk_ioat.so.7.0 00:02:41.646 SO libspdk_vfio_user.so.5.0 00:02:41.646 CC lib/util/pipe.o 00:02:41.646 SYMLINK libspdk_ioat.so 00:02:41.646 CC lib/util/strerror_tls.o 00:02:41.646 CC 
lib/util/string.o 00:02:41.905 CC lib/util/uuid.o 00:02:41.905 SYMLINK libspdk_vfio_user.so 00:02:41.905 CC lib/util/xor.o 00:02:41.905 CC lib/util/zipf.o 00:02:41.905 CC lib/util/md5.o 00:02:42.164 LIB libspdk_util.a 00:02:42.164 SO libspdk_util.so.10.0 00:02:42.424 LIB libspdk_trace_parser.a 00:02:42.424 SYMLINK libspdk_util.so 00:02:42.424 SO libspdk_trace_parser.so.6.0 00:02:42.424 SYMLINK libspdk_trace_parser.so 00:02:42.424 CC lib/rdma_utils/rdma_utils.o 00:02:42.424 CC lib/vmd/vmd.o 00:02:42.424 CC lib/vmd/led.o 00:02:42.424 CC lib/env_dpdk/env.o 00:02:42.424 CC lib/json/json_parse.o 00:02:42.424 CC lib/env_dpdk/memory.o 00:02:42.424 CC lib/json/json_util.o 00:02:42.683 CC lib/idxd/idxd.o 00:02:42.683 CC lib/conf/conf.o 00:02:42.683 CC lib/rdma_provider/common.o 00:02:42.683 CC lib/env_dpdk/pci.o 00:02:42.683 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:42.683 CC lib/json/json_write.o 00:02:42.683 CC lib/idxd/idxd_user.o 00:02:42.683 LIB libspdk_rdma_utils.a 00:02:42.683 LIB libspdk_conf.a 00:02:42.942 SO libspdk_rdma_utils.so.1.0 00:02:42.942 SO libspdk_conf.so.6.0 00:02:42.942 SYMLINK libspdk_conf.so 00:02:42.942 SYMLINK libspdk_rdma_utils.so 00:02:42.942 CC lib/env_dpdk/init.o 00:02:42.942 CC lib/env_dpdk/threads.o 00:02:42.942 LIB libspdk_rdma_provider.a 00:02:42.942 SO libspdk_rdma_provider.so.6.0 00:02:42.942 SYMLINK libspdk_rdma_provider.so 00:02:42.942 CC lib/env_dpdk/pci_ioat.o 00:02:42.942 CC lib/idxd/idxd_kernel.o 00:02:42.942 CC lib/env_dpdk/pci_virtio.o 00:02:43.201 LIB libspdk_json.a 00:02:43.201 CC lib/env_dpdk/pci_vmd.o 00:02:43.201 SO libspdk_json.so.6.0 00:02:43.201 CC lib/env_dpdk/pci_idxd.o 00:02:43.201 CC lib/env_dpdk/pci_event.o 00:02:43.201 CC lib/env_dpdk/sigbus_handler.o 00:02:43.201 SYMLINK libspdk_json.so 00:02:43.201 CC lib/env_dpdk/pci_dpdk.o 00:02:43.201 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:43.201 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:43.201 LIB libspdk_idxd.a 00:02:43.201 SO libspdk_idxd.so.12.1 00:02:43.201 LIB 
libspdk_vmd.a 00:02:43.201 SYMLINK libspdk_idxd.so 00:02:43.461 SO libspdk_vmd.so.6.0 00:02:43.461 SYMLINK libspdk_vmd.so 00:02:43.461 CC lib/jsonrpc/jsonrpc_client.o 00:02:43.461 CC lib/jsonrpc/jsonrpc_server.o 00:02:43.461 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:43.461 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:43.721 LIB libspdk_jsonrpc.a 00:02:43.721 SO libspdk_jsonrpc.so.6.0 00:02:43.980 SYMLINK libspdk_jsonrpc.so 00:02:44.240 LIB libspdk_env_dpdk.a 00:02:44.240 SO libspdk_env_dpdk.so.15.0 00:02:44.240 CC lib/rpc/rpc.o 00:02:44.499 SYMLINK libspdk_env_dpdk.so 00:02:44.499 LIB libspdk_rpc.a 00:02:44.499 SO libspdk_rpc.so.6.0 00:02:44.760 SYMLINK libspdk_rpc.so 00:02:45.020 CC lib/notify/notify.o 00:02:45.020 CC lib/notify/notify_rpc.o 00:02:45.020 CC lib/trace/trace.o 00:02:45.020 CC lib/trace/trace_flags.o 00:02:45.020 CC lib/trace/trace_rpc.o 00:02:45.020 CC lib/keyring/keyring.o 00:02:45.020 CC lib/keyring/keyring_rpc.o 00:02:45.280 LIB libspdk_notify.a 00:02:45.280 SO libspdk_notify.so.6.0 00:02:45.280 LIB libspdk_keyring.a 00:02:45.280 LIB libspdk_trace.a 00:02:45.280 SYMLINK libspdk_notify.so 00:02:45.280 SO libspdk_keyring.so.2.0 00:02:45.280 SO libspdk_trace.so.11.0 00:02:45.280 SYMLINK libspdk_keyring.so 00:02:45.540 SYMLINK libspdk_trace.so 00:02:45.800 CC lib/thread/thread.o 00:02:45.800 CC lib/sock/sock.o 00:02:45.800 CC lib/thread/iobuf.o 00:02:45.800 CC lib/sock/sock_rpc.o 00:02:46.370 LIB libspdk_sock.a 00:02:46.370 SO libspdk_sock.so.10.0 00:02:46.370 SYMLINK libspdk_sock.so 00:02:46.940 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:46.940 CC lib/nvme/nvme_ctrlr.o 00:02:46.940 CC lib/nvme/nvme_fabric.o 00:02:46.940 CC lib/nvme/nvme_ns_cmd.o 00:02:46.940 CC lib/nvme/nvme_ns.o 00:02:46.940 CC lib/nvme/nvme_pcie_common.o 00:02:46.940 CC lib/nvme/nvme_qpair.o 00:02:46.940 CC lib/nvme/nvme_pcie.o 00:02:46.940 CC lib/nvme/nvme.o 00:02:47.509 CC lib/nvme/nvme_quirks.o 00:02:47.509 LIB libspdk_thread.a 00:02:47.509 CC lib/nvme/nvme_transport.o 00:02:47.509 SO 
libspdk_thread.so.10.1 00:02:47.509 CC lib/nvme/nvme_discovery.o 00:02:47.509 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:47.509 SYMLINK libspdk_thread.so 00:02:47.509 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:47.768 CC lib/nvme/nvme_tcp.o 00:02:47.768 CC lib/nvme/nvme_opal.o 00:02:47.768 CC lib/accel/accel.o 00:02:48.027 CC lib/nvme/nvme_io_msg.o 00:02:48.027 CC lib/nvme/nvme_poll_group.o 00:02:48.027 CC lib/nvme/nvme_zns.o 00:02:48.027 CC lib/nvme/nvme_stubs.o 00:02:48.027 CC lib/nvme/nvme_auth.o 00:02:48.286 CC lib/nvme/nvme_cuse.o 00:02:48.286 CC lib/nvme/nvme_rdma.o 00:02:48.545 CC lib/accel/accel_rpc.o 00:02:48.545 CC lib/accel/accel_sw.o 00:02:48.804 CC lib/blob/blobstore.o 00:02:48.804 CC lib/init/json_config.o 00:02:48.804 CC lib/virtio/virtio.o 00:02:49.064 CC lib/virtio/virtio_vhost_user.o 00:02:49.064 CC lib/init/subsystem.o 00:02:49.064 LIB libspdk_accel.a 00:02:49.064 CC lib/init/subsystem_rpc.o 00:02:49.064 CC lib/blob/request.o 00:02:49.064 SO libspdk_accel.so.16.0 00:02:49.064 CC lib/blob/zeroes.o 00:02:49.064 CC lib/blob/blob_bs_dev.o 00:02:49.324 CC lib/init/rpc.o 00:02:49.324 SYMLINK libspdk_accel.so 00:02:49.324 CC lib/virtio/virtio_vfio_user.o 00:02:49.324 CC lib/virtio/virtio_pci.o 00:02:49.324 LIB libspdk_init.a 00:02:49.324 CC lib/bdev/bdev.o 00:02:49.324 CC lib/fsdev/fsdev.o 00:02:49.324 SO libspdk_init.so.6.0 00:02:49.324 CC lib/bdev/bdev_rpc.o 00:02:49.324 CC lib/bdev/bdev_zone.o 00:02:49.584 SYMLINK libspdk_init.so 00:02:49.584 CC lib/fsdev/fsdev_io.o 00:02:49.584 CC lib/fsdev/fsdev_rpc.o 00:02:49.584 CC lib/bdev/part.o 00:02:49.584 CC lib/bdev/scsi_nvme.o 00:02:49.584 LIB libspdk_virtio.a 00:02:49.584 SO libspdk_virtio.so.7.0 00:02:49.584 CC lib/event/app.o 00:02:49.844 CC lib/event/reactor.o 00:02:49.844 SYMLINK libspdk_virtio.so 00:02:49.844 CC lib/event/log_rpc.o 00:02:49.844 LIB libspdk_nvme.a 00:02:49.844 CC lib/event/app_rpc.o 00:02:49.844 CC lib/event/scheduler_static.o 00:02:50.104 SO libspdk_nvme.so.14.0 00:02:50.104 LIB 
libspdk_fsdev.a 00:02:50.104 SO libspdk_fsdev.so.1.0 00:02:50.364 SYMLINK libspdk_fsdev.so 00:02:50.364 LIB libspdk_event.a 00:02:50.364 SYMLINK libspdk_nvme.so 00:02:50.364 SO libspdk_event.so.14.0 00:02:50.364 SYMLINK libspdk_event.so 00:02:50.624 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:51.194 LIB libspdk_fuse_dispatcher.a 00:02:51.454 SO libspdk_fuse_dispatcher.so.1.0 00:02:51.454 SYMLINK libspdk_fuse_dispatcher.so 00:02:52.394 LIB libspdk_blob.a 00:02:52.394 LIB libspdk_bdev.a 00:02:52.394 SO libspdk_blob.so.11.0 00:02:52.394 SO libspdk_bdev.so.16.0 00:02:52.394 SYMLINK libspdk_blob.so 00:02:52.654 SYMLINK libspdk_bdev.so 00:02:52.654 CC lib/lvol/lvol.o 00:02:52.654 CC lib/blobfs/tree.o 00:02:52.654 CC lib/blobfs/blobfs.o 00:02:52.654 CC lib/nvmf/ctrlr.o 00:02:52.654 CC lib/nvmf/ctrlr_discovery.o 00:02:52.654 CC lib/nvmf/ctrlr_bdev.o 00:02:52.654 CC lib/scsi/dev.o 00:02:52.654 CC lib/ublk/ublk.o 00:02:52.914 CC lib/nbd/nbd.o 00:02:52.914 CC lib/ftl/ftl_core.o 00:02:52.914 CC lib/ublk/ublk_rpc.o 00:02:52.914 CC lib/scsi/lun.o 00:02:53.198 CC lib/nvmf/subsystem.o 00:02:53.198 CC lib/ftl/ftl_init.o 00:02:53.501 CC lib/nbd/nbd_rpc.o 00:02:53.501 CC lib/scsi/port.o 00:02:53.501 CC lib/nvmf/nvmf.o 00:02:53.501 CC lib/scsi/scsi.o 00:02:53.501 LIB libspdk_nbd.a 00:02:53.501 CC lib/ftl/ftl_layout.o 00:02:53.501 SO libspdk_nbd.so.7.0 00:02:53.501 LIB libspdk_ublk.a 00:02:53.501 SO libspdk_ublk.so.3.0 00:02:53.501 SYMLINK libspdk_nbd.so 00:02:53.768 CC lib/nvmf/nvmf_rpc.o 00:02:53.768 CC lib/nvmf/transport.o 00:02:53.768 CC lib/scsi/scsi_bdev.o 00:02:53.768 SYMLINK libspdk_ublk.so 00:02:53.768 CC lib/ftl/ftl_debug.o 00:02:53.768 LIB libspdk_blobfs.a 00:02:53.768 SO libspdk_blobfs.so.10.0 00:02:53.768 CC lib/ftl/ftl_io.o 00:02:53.768 SYMLINK libspdk_blobfs.so 00:02:53.768 CC lib/ftl/ftl_sb.o 00:02:54.028 CC lib/ftl/ftl_l2p.o 00:02:54.028 LIB libspdk_lvol.a 00:02:54.028 SO libspdk_lvol.so.10.0 00:02:54.028 SYMLINK libspdk_lvol.so 00:02:54.028 CC 
lib/ftl/ftl_l2p_flat.o 00:02:54.028 CC lib/ftl/ftl_nv_cache.o 00:02:54.028 CC lib/ftl/ftl_band.o 00:02:54.288 CC lib/ftl/ftl_band_ops.o 00:02:54.288 CC lib/scsi/scsi_pr.o 00:02:54.288 CC lib/ftl/ftl_writer.o 00:02:54.288 CC lib/nvmf/tcp.o 00:02:54.547 CC lib/nvmf/stubs.o 00:02:54.547 CC lib/ftl/ftl_rq.o 00:02:54.547 CC lib/ftl/ftl_reloc.o 00:02:54.547 CC lib/nvmf/mdns_server.o 00:02:54.547 CC lib/scsi/scsi_rpc.o 00:02:54.547 CC lib/ftl/ftl_l2p_cache.o 00:02:54.547 CC lib/nvmf/rdma.o 00:02:54.547 CC lib/scsi/task.o 00:02:54.547 CC lib/nvmf/auth.o 00:02:54.807 LIB libspdk_scsi.a 00:02:54.807 CC lib/ftl/ftl_p2l.o 00:02:54.807 CC lib/ftl/ftl_p2l_log.o 00:02:54.807 CC lib/ftl/mngt/ftl_mngt.o 00:02:54.807 SO libspdk_scsi.so.9.0 00:02:55.073 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:55.073 SYMLINK libspdk_scsi.so 00:02:55.073 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:55.073 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:55.073 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:55.073 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:55.332 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:55.332 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:55.332 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:55.332 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:55.332 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:55.591 CC lib/iscsi/conn.o 00:02:55.591 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:55.591 CC lib/iscsi/init_grp.o 00:02:55.591 CC lib/iscsi/iscsi.o 00:02:55.591 CC lib/vhost/vhost.o 00:02:55.591 CC lib/iscsi/param.o 00:02:55.592 CC lib/iscsi/portal_grp.o 00:02:55.592 CC lib/iscsi/tgt_node.o 00:02:55.851 CC lib/iscsi/iscsi_subsystem.o 00:02:55.851 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:55.851 CC lib/ftl/utils/ftl_conf.o 00:02:55.851 CC lib/ftl/utils/ftl_md.o 00:02:56.111 CC lib/vhost/vhost_rpc.o 00:02:56.111 CC lib/iscsi/iscsi_rpc.o 00:02:56.111 CC lib/ftl/utils/ftl_mempool.o 00:02:56.111 CC lib/iscsi/task.o 00:02:56.111 CC lib/ftl/utils/ftl_bitmap.o 00:02:56.111 CC lib/vhost/vhost_scsi.o 00:02:56.371 CC lib/ftl/utils/ftl_property.o 00:02:56.371 CC 
lib/vhost/vhost_blk.o 00:02:56.371 CC lib/vhost/rte_vhost_user.o 00:02:56.371 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:56.371 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:56.371 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:56.631 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:56.631 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:56.631 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:56.631 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:56.631 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:56.631 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:56.891 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:56.891 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:56.891 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:56.891 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:56.891 CC lib/ftl/base/ftl_base_dev.o 00:02:57.151 CC lib/ftl/base/ftl_base_bdev.o 00:02:57.151 CC lib/ftl/ftl_trace.o 00:02:57.151 LIB libspdk_iscsi.a 00:02:57.151 SO libspdk_iscsi.so.8.0 00:02:57.151 LIB libspdk_nvmf.a 00:02:57.411 LIB libspdk_ftl.a 00:02:57.411 SO libspdk_nvmf.so.19.0 00:02:57.411 SYMLINK libspdk_iscsi.so 00:02:57.411 LIB libspdk_vhost.a 00:02:57.669 SO libspdk_vhost.so.8.0 00:02:57.669 SO libspdk_ftl.so.9.0 00:02:57.669 SYMLINK libspdk_nvmf.so 00:02:57.669 SYMLINK libspdk_vhost.so 00:02:57.928 SYMLINK libspdk_ftl.so 00:02:58.188 CC module/env_dpdk/env_dpdk_rpc.o 00:02:58.188 CC module/scheduler/gscheduler/gscheduler.o 00:02:58.188 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:58.188 CC module/blob/bdev/blob_bdev.o 00:02:58.188 CC module/keyring/file/keyring.o 00:02:58.188 CC module/fsdev/aio/fsdev_aio.o 00:02:58.188 CC module/accel/ioat/accel_ioat.o 00:02:58.188 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:58.188 CC module/sock/posix/posix.o 00:02:58.188 CC module/accel/error/accel_error.o 00:02:58.447 LIB libspdk_env_dpdk_rpc.a 00:02:58.448 SO libspdk_env_dpdk_rpc.so.6.0 00:02:58.448 CC module/keyring/file/keyring_rpc.o 00:02:58.448 SYMLINK libspdk_env_dpdk_rpc.so 00:02:58.448 LIB libspdk_scheduler_gscheduler.a 00:02:58.448 CC 
module/fsdev/aio/fsdev_aio_rpc.o 00:02:58.448 LIB libspdk_scheduler_dpdk_governor.a 00:02:58.448 SO libspdk_scheduler_gscheduler.so.4.0 00:02:58.448 CC module/accel/ioat/accel_ioat_rpc.o 00:02:58.448 LIB libspdk_scheduler_dynamic.a 00:02:58.448 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:58.448 CC module/accel/error/accel_error_rpc.o 00:02:58.448 SO libspdk_scheduler_dynamic.so.4.0 00:02:58.448 SYMLINK libspdk_scheduler_gscheduler.so 00:02:58.448 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:58.448 SYMLINK libspdk_scheduler_dynamic.so 00:02:58.448 LIB libspdk_blob_bdev.a 00:02:58.448 LIB libspdk_keyring_file.a 00:02:58.707 SO libspdk_blob_bdev.so.11.0 00:02:58.707 CC module/fsdev/aio/linux_aio_mgr.o 00:02:58.707 LIB libspdk_accel_ioat.a 00:02:58.707 SO libspdk_keyring_file.so.2.0 00:02:58.707 SO libspdk_accel_ioat.so.6.0 00:02:58.707 LIB libspdk_accel_error.a 00:02:58.707 SO libspdk_accel_error.so.2.0 00:02:58.707 SYMLINK libspdk_blob_bdev.so 00:02:58.707 SYMLINK libspdk_keyring_file.so 00:02:58.707 CC module/accel/iaa/accel_iaa.o 00:02:58.707 CC module/keyring/linux/keyring.o 00:02:58.707 SYMLINK libspdk_accel_ioat.so 00:02:58.707 CC module/accel/dsa/accel_dsa.o 00:02:58.707 CC module/accel/dsa/accel_dsa_rpc.o 00:02:58.707 SYMLINK libspdk_accel_error.so 00:02:58.707 CC module/keyring/linux/keyring_rpc.o 00:02:58.707 CC module/accel/iaa/accel_iaa_rpc.o 00:02:58.967 LIB libspdk_keyring_linux.a 00:02:58.967 SO libspdk_keyring_linux.so.1.0 00:02:58.967 CC module/bdev/delay/vbdev_delay.o 00:02:58.968 CC module/blobfs/bdev/blobfs_bdev.o 00:02:58.968 LIB libspdk_accel_iaa.a 00:02:58.968 SYMLINK libspdk_keyring_linux.so 00:02:58.968 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:58.968 SO libspdk_accel_iaa.so.3.0 00:02:58.968 LIB libspdk_fsdev_aio.a 00:02:58.968 CC module/bdev/error/vbdev_error.o 00:02:58.968 CC module/bdev/gpt/gpt.o 00:02:58.968 LIB libspdk_accel_dsa.a 00:02:58.968 SO libspdk_fsdev_aio.so.1.0 00:02:58.968 SYMLINK libspdk_accel_iaa.so 00:02:58.968 CC 
module/bdev/gpt/vbdev_gpt.o 00:02:58.968 SO libspdk_accel_dsa.so.5.0 00:02:58.968 CC module/bdev/lvol/vbdev_lvol.o 00:02:58.968 SYMLINK libspdk_fsdev_aio.so 00:02:59.228 LIB libspdk_sock_posix.a 00:02:59.228 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:59.228 CC module/bdev/error/vbdev_error_rpc.o 00:02:59.228 SYMLINK libspdk_accel_dsa.so 00:02:59.228 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:59.228 SO libspdk_sock_posix.so.6.0 00:02:59.228 SYMLINK libspdk_sock_posix.so 00:02:59.228 LIB libspdk_bdev_error.a 00:02:59.228 LIB libspdk_blobfs_bdev.a 00:02:59.228 LIB libspdk_bdev_delay.a 00:02:59.228 SO libspdk_blobfs_bdev.so.6.0 00:02:59.228 SO libspdk_bdev_error.so.6.0 00:02:59.228 LIB libspdk_bdev_gpt.a 00:02:59.228 SO libspdk_bdev_delay.so.6.0 00:02:59.228 CC module/bdev/malloc/bdev_malloc.o 00:02:59.228 SO libspdk_bdev_gpt.so.6.0 00:02:59.488 CC module/bdev/null/bdev_null.o 00:02:59.488 SYMLINK libspdk_blobfs_bdev.so 00:02:59.488 CC module/bdev/null/bdev_null_rpc.o 00:02:59.488 SYMLINK libspdk_bdev_error.so 00:02:59.488 SYMLINK libspdk_bdev_delay.so 00:02:59.488 SYMLINK libspdk_bdev_gpt.so 00:02:59.488 CC module/bdev/passthru/vbdev_passthru.o 00:02:59.488 CC module/bdev/nvme/bdev_nvme.o 00:02:59.488 CC module/bdev/raid/bdev_raid.o 00:02:59.488 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:59.488 CC module/bdev/split/vbdev_split.o 00:02:59.488 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:59.488 LIB libspdk_bdev_lvol.a 00:02:59.748 CC module/bdev/aio/bdev_aio.o 00:02:59.748 LIB libspdk_bdev_null.a 00:02:59.748 SO libspdk_bdev_lvol.so.6.0 00:02:59.748 SO libspdk_bdev_null.so.6.0 00:02:59.748 SYMLINK libspdk_bdev_lvol.so 00:02:59.748 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:59.748 CC module/bdev/aio/bdev_aio_rpc.o 00:02:59.748 SYMLINK libspdk_bdev_null.so 00:02:59.748 LIB libspdk_bdev_malloc.a 00:02:59.748 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:59.748 SO libspdk_bdev_malloc.so.6.0 00:02:59.748 CC module/bdev/split/vbdev_split_rpc.o 
00:02:59.748 SYMLINK libspdk_bdev_malloc.so 00:02:59.748 CC module/bdev/raid/bdev_raid_rpc.o 00:03:00.009 CC module/bdev/ftl/bdev_ftl.o 00:03:00.009 LIB libspdk_bdev_passthru.a 00:03:00.009 LIB libspdk_bdev_zone_block.a 00:03:00.009 SO libspdk_bdev_passthru.so.6.0 00:03:00.009 LIB libspdk_bdev_aio.a 00:03:00.009 SO libspdk_bdev_zone_block.so.6.0 00:03:00.009 LIB libspdk_bdev_split.a 00:03:00.009 SO libspdk_bdev_aio.so.6.0 00:03:00.009 CC module/bdev/iscsi/bdev_iscsi.o 00:03:00.009 SO libspdk_bdev_split.so.6.0 00:03:00.009 SYMLINK libspdk_bdev_passthru.so 00:03:00.009 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:00.009 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:00.009 SYMLINK libspdk_bdev_zone_block.so 00:03:00.009 SYMLINK libspdk_bdev_aio.so 00:03:00.009 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:00.009 CC module/bdev/raid/bdev_raid_sb.o 00:03:00.009 SYMLINK libspdk_bdev_split.so 00:03:00.009 CC module/bdev/raid/raid0.o 00:03:00.009 CC module/bdev/nvme/nvme_rpc.o 00:03:00.269 CC module/bdev/nvme/bdev_mdns_client.o 00:03:00.269 LIB libspdk_bdev_ftl.a 00:03:00.269 SO libspdk_bdev_ftl.so.6.0 00:03:00.269 SYMLINK libspdk_bdev_ftl.so 00:03:00.269 CC module/bdev/nvme/vbdev_opal.o 00:03:00.269 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:00.269 CC module/bdev/raid/raid1.o 00:03:00.269 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:00.269 CC module/bdev/raid/concat.o 00:03:00.269 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:00.529 LIB libspdk_bdev_iscsi.a 00:03:00.529 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:00.529 SO libspdk_bdev_iscsi.so.6.0 00:03:00.529 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:00.529 CC module/bdev/raid/raid5f.o 00:03:00.529 SYMLINK libspdk_bdev_iscsi.so 00:03:00.789 LIB libspdk_bdev_virtio.a 00:03:00.789 SO libspdk_bdev_virtio.so.6.0 00:03:01.054 SYMLINK libspdk_bdev_virtio.so 00:03:01.054 LIB libspdk_bdev_raid.a 00:03:01.317 SO libspdk_bdev_raid.so.6.0 00:03:01.317 SYMLINK libspdk_bdev_raid.so 00:03:01.886 LIB libspdk_bdev_nvme.a 00:03:02.145 SO 
libspdk_bdev_nvme.so.7.0 00:03:02.145 SYMLINK libspdk_bdev_nvme.so 00:03:02.715 CC module/event/subsystems/sock/sock.o 00:03:02.715 CC module/event/subsystems/keyring/keyring.o 00:03:02.715 CC module/event/subsystems/vmd/vmd.o 00:03:02.715 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:02.715 CC module/event/subsystems/scheduler/scheduler.o 00:03:02.715 CC module/event/subsystems/iobuf/iobuf.o 00:03:02.715 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:02.715 CC module/event/subsystems/fsdev/fsdev.o 00:03:02.715 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:02.975 LIB libspdk_event_keyring.a 00:03:02.975 LIB libspdk_event_sock.a 00:03:02.975 LIB libspdk_event_vmd.a 00:03:02.975 LIB libspdk_event_fsdev.a 00:03:02.975 LIB libspdk_event_vhost_blk.a 00:03:02.975 LIB libspdk_event_scheduler.a 00:03:02.975 LIB libspdk_event_iobuf.a 00:03:02.975 SO libspdk_event_keyring.so.1.0 00:03:02.975 SO libspdk_event_sock.so.5.0 00:03:02.975 SO libspdk_event_vmd.so.6.0 00:03:02.975 SO libspdk_event_fsdev.so.1.0 00:03:02.975 SO libspdk_event_vhost_blk.so.3.0 00:03:02.975 SO libspdk_event_iobuf.so.3.0 00:03:02.975 SO libspdk_event_scheduler.so.4.0 00:03:02.975 SYMLINK libspdk_event_vhost_blk.so 00:03:02.975 SYMLINK libspdk_event_fsdev.so 00:03:02.975 SYMLINK libspdk_event_keyring.so 00:03:02.975 SYMLINK libspdk_event_sock.so 00:03:02.975 SYMLINK libspdk_event_vmd.so 00:03:02.975 SYMLINK libspdk_event_scheduler.so 00:03:02.975 SYMLINK libspdk_event_iobuf.so 00:03:03.546 CC module/event/subsystems/accel/accel.o 00:03:03.546 LIB libspdk_event_accel.a 00:03:03.546 SO libspdk_event_accel.so.6.0 00:03:03.546 SYMLINK libspdk_event_accel.so 00:03:04.116 CC module/event/subsystems/bdev/bdev.o 00:03:04.116 LIB libspdk_event_bdev.a 00:03:04.376 SO libspdk_event_bdev.so.6.0 00:03:04.376 SYMLINK libspdk_event_bdev.so 00:03:04.635 CC module/event/subsystems/scsi/scsi.o 00:03:04.635 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:04.635 CC module/event/subsystems/nbd/nbd.o 00:03:04.635 
CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:04.635 CC module/event/subsystems/ublk/ublk.o 00:03:04.895 LIB libspdk_event_scsi.a 00:03:04.895 LIB libspdk_event_nbd.a 00:03:04.895 SO libspdk_event_scsi.so.6.0 00:03:04.895 LIB libspdk_event_ublk.a 00:03:04.895 SO libspdk_event_nbd.so.6.0 00:03:04.895 SYMLINK libspdk_event_scsi.so 00:03:04.895 SO libspdk_event_ublk.so.3.0 00:03:04.895 LIB libspdk_event_nvmf.a 00:03:04.895 SYMLINK libspdk_event_nbd.so 00:03:04.895 SYMLINK libspdk_event_ublk.so 00:03:04.895 SO libspdk_event_nvmf.so.6.0 00:03:05.155 SYMLINK libspdk_event_nvmf.so 00:03:05.155 CC module/event/subsystems/iscsi/iscsi.o 00:03:05.155 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:05.415 LIB libspdk_event_iscsi.a 00:03:05.415 LIB libspdk_event_vhost_scsi.a 00:03:05.415 SO libspdk_event_vhost_scsi.so.3.0 00:03:05.415 SO libspdk_event_iscsi.so.6.0 00:03:05.415 SYMLINK libspdk_event_vhost_scsi.so 00:03:05.415 SYMLINK libspdk_event_iscsi.so 00:03:05.675 SO libspdk.so.6.0 00:03:05.675 SYMLINK libspdk.so 00:03:05.935 CXX app/trace/trace.o 00:03:05.935 CC app/spdk_lspci/spdk_lspci.o 00:03:05.935 CC app/spdk_nvme_perf/perf.o 00:03:05.935 CC app/trace_record/trace_record.o 00:03:05.935 CC app/spdk_nvme_identify/identify.o 00:03:05.935 CC app/iscsi_tgt/iscsi_tgt.o 00:03:05.935 CC app/nvmf_tgt/nvmf_main.o 00:03:05.935 CC test/thread/poller_perf/poller_perf.o 00:03:05.935 CC app/spdk_tgt/spdk_tgt.o 00:03:05.935 CC examples/util/zipf/zipf.o 00:03:06.195 LINK spdk_lspci 00:03:06.195 LINK poller_perf 00:03:06.195 LINK nvmf_tgt 00:03:06.195 LINK spdk_trace_record 00:03:06.195 LINK iscsi_tgt 00:03:06.195 LINK spdk_tgt 00:03:06.195 LINK zipf 00:03:06.454 LINK spdk_trace 00:03:06.454 CC app/spdk_nvme_discover/discovery_aer.o 00:03:06.454 CC app/spdk_top/spdk_top.o 00:03:06.454 CC app/spdk_dd/spdk_dd.o 00:03:06.454 CC test/dma/test_dma/test_dma.o 00:03:06.454 LINK spdk_nvme_discover 00:03:06.713 CC examples/ioat/perf/perf.o 00:03:06.713 CC 
examples/ioat/verify/verify.o 00:03:06.713 CC examples/vmd/lsvmd/lsvmd.o 00:03:06.713 CC app/fio/nvme/fio_plugin.o 00:03:06.713 LINK lsvmd 00:03:06.713 LINK spdk_nvme_perf 00:03:06.713 LINK ioat_perf 00:03:06.713 LINK verify 00:03:06.973 CC app/fio/bdev/fio_plugin.o 00:03:06.973 LINK spdk_dd 00:03:06.973 LINK spdk_nvme_identify 00:03:06.973 CC examples/vmd/led/led.o 00:03:06.973 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:06.973 LINK test_dma 00:03:07.233 CC app/vhost/vhost.o 00:03:07.233 CC examples/idxd/perf/perf.o 00:03:07.233 LINK led 00:03:07.233 LINK spdk_nvme 00:03:07.233 CC examples/thread/thread/thread_ex.o 00:03:07.233 LINK interrupt_tgt 00:03:07.233 LINK vhost 00:03:07.233 CC test/app/bdev_svc/bdev_svc.o 00:03:07.233 LINK spdk_bdev 00:03:07.493 TEST_HEADER include/spdk/accel.h 00:03:07.493 TEST_HEADER include/spdk/accel_module.h 00:03:07.493 TEST_HEADER include/spdk/assert.h 00:03:07.493 LINK spdk_top 00:03:07.493 TEST_HEADER include/spdk/barrier.h 00:03:07.493 TEST_HEADER include/spdk/base64.h 00:03:07.493 TEST_HEADER include/spdk/bdev.h 00:03:07.493 TEST_HEADER include/spdk/bdev_module.h 00:03:07.493 TEST_HEADER include/spdk/bdev_zone.h 00:03:07.493 TEST_HEADER include/spdk/bit_array.h 00:03:07.493 TEST_HEADER include/spdk/bit_pool.h 00:03:07.493 TEST_HEADER include/spdk/blob_bdev.h 00:03:07.493 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:07.493 TEST_HEADER include/spdk/blobfs.h 00:03:07.493 TEST_HEADER include/spdk/blob.h 00:03:07.493 TEST_HEADER include/spdk/conf.h 00:03:07.493 TEST_HEADER include/spdk/config.h 00:03:07.493 TEST_HEADER include/spdk/cpuset.h 00:03:07.493 TEST_HEADER include/spdk/crc16.h 00:03:07.493 TEST_HEADER include/spdk/crc32.h 00:03:07.493 TEST_HEADER include/spdk/crc64.h 00:03:07.493 TEST_HEADER include/spdk/dif.h 00:03:07.493 TEST_HEADER include/spdk/dma.h 00:03:07.493 TEST_HEADER include/spdk/endian.h 00:03:07.493 TEST_HEADER include/spdk/env_dpdk.h 00:03:07.493 TEST_HEADER include/spdk/env.h 00:03:07.493 TEST_HEADER 
include/spdk/event.h 00:03:07.493 TEST_HEADER include/spdk/fd_group.h 00:03:07.493 TEST_HEADER include/spdk/fd.h 00:03:07.493 TEST_HEADER include/spdk/file.h 00:03:07.493 TEST_HEADER include/spdk/fsdev.h 00:03:07.493 LINK idxd_perf 00:03:07.493 TEST_HEADER include/spdk/fsdev_module.h 00:03:07.493 TEST_HEADER include/spdk/ftl.h 00:03:07.493 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:07.493 TEST_HEADER include/spdk/gpt_spec.h 00:03:07.493 TEST_HEADER include/spdk/hexlify.h 00:03:07.493 TEST_HEADER include/spdk/histogram_data.h 00:03:07.493 TEST_HEADER include/spdk/idxd.h 00:03:07.493 TEST_HEADER include/spdk/idxd_spec.h 00:03:07.493 CC examples/sock/hello_world/hello_sock.o 00:03:07.493 TEST_HEADER include/spdk/init.h 00:03:07.493 TEST_HEADER include/spdk/ioat.h 00:03:07.493 TEST_HEADER include/spdk/ioat_spec.h 00:03:07.493 TEST_HEADER include/spdk/iscsi_spec.h 00:03:07.493 TEST_HEADER include/spdk/json.h 00:03:07.493 TEST_HEADER include/spdk/jsonrpc.h 00:03:07.493 TEST_HEADER include/spdk/keyring.h 00:03:07.493 TEST_HEADER include/spdk/keyring_module.h 00:03:07.493 TEST_HEADER include/spdk/likely.h 00:03:07.493 TEST_HEADER include/spdk/log.h 00:03:07.493 TEST_HEADER include/spdk/lvol.h 00:03:07.493 TEST_HEADER include/spdk/md5.h 00:03:07.493 TEST_HEADER include/spdk/memory.h 00:03:07.493 TEST_HEADER include/spdk/mmio.h 00:03:07.493 TEST_HEADER include/spdk/nbd.h 00:03:07.493 TEST_HEADER include/spdk/net.h 00:03:07.493 LINK bdev_svc 00:03:07.493 TEST_HEADER include/spdk/notify.h 00:03:07.493 TEST_HEADER include/spdk/nvme.h 00:03:07.493 LINK thread 00:03:07.493 TEST_HEADER include/spdk/nvme_intel.h 00:03:07.493 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:07.493 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:07.493 TEST_HEADER include/spdk/nvme_spec.h 00:03:07.493 TEST_HEADER include/spdk/nvme_zns.h 00:03:07.493 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:07.493 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:07.493 TEST_HEADER include/spdk/nvmf.h 00:03:07.493 
TEST_HEADER include/spdk/nvmf_spec.h 00:03:07.493 TEST_HEADER include/spdk/nvmf_transport.h 00:03:07.493 TEST_HEADER include/spdk/opal.h 00:03:07.493 TEST_HEADER include/spdk/opal_spec.h 00:03:07.493 TEST_HEADER include/spdk/pci_ids.h 00:03:07.493 TEST_HEADER include/spdk/pipe.h 00:03:07.493 TEST_HEADER include/spdk/queue.h 00:03:07.493 TEST_HEADER include/spdk/reduce.h 00:03:07.493 TEST_HEADER include/spdk/rpc.h 00:03:07.493 TEST_HEADER include/spdk/scheduler.h 00:03:07.493 TEST_HEADER include/spdk/scsi.h 00:03:07.493 TEST_HEADER include/spdk/scsi_spec.h 00:03:07.493 TEST_HEADER include/spdk/sock.h 00:03:07.493 TEST_HEADER include/spdk/stdinc.h 00:03:07.493 CC test/env/vtophys/vtophys.o 00:03:07.493 TEST_HEADER include/spdk/string.h 00:03:07.493 TEST_HEADER include/spdk/thread.h 00:03:07.494 TEST_HEADER include/spdk/trace.h 00:03:07.494 CC test/env/mem_callbacks/mem_callbacks.o 00:03:07.494 TEST_HEADER include/spdk/trace_parser.h 00:03:07.494 TEST_HEADER include/spdk/tree.h 00:03:07.494 TEST_HEADER include/spdk/ublk.h 00:03:07.494 TEST_HEADER include/spdk/util.h 00:03:07.494 TEST_HEADER include/spdk/uuid.h 00:03:07.494 TEST_HEADER include/spdk/version.h 00:03:07.494 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:07.494 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:07.494 TEST_HEADER include/spdk/vhost.h 00:03:07.494 TEST_HEADER include/spdk/vmd.h 00:03:07.494 TEST_HEADER include/spdk/xor.h 00:03:07.494 TEST_HEADER include/spdk/zipf.h 00:03:07.494 CXX test/cpp_headers/accel.o 00:03:07.494 CC test/event/event_perf/event_perf.o 00:03:07.753 CC test/event/reactor/reactor.o 00:03:07.753 CC test/nvme/aer/aer.o 00:03:07.753 LINK vtophys 00:03:07.753 CC test/rpc_client/rpc_client_test.o 00:03:07.753 LINK hello_sock 00:03:07.753 CXX test/cpp_headers/accel_module.o 00:03:07.753 LINK event_perf 00:03:07.753 CC test/app/histogram_perf/histogram_perf.o 00:03:07.753 LINK reactor 00:03:07.753 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:07.753 CXX test/cpp_headers/assert.o 
00:03:07.753 LINK rpc_client_test 00:03:08.011 LINK histogram_perf 00:03:08.011 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:08.011 LINK aer 00:03:08.011 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:08.011 CXX test/cpp_headers/barrier.o 00:03:08.011 LINK mem_callbacks 00:03:08.011 CC examples/accel/perf/accel_perf.o 00:03:08.011 CC test/event/reactor_perf/reactor_perf.o 00:03:08.011 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:08.011 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:08.011 CC test/env/memory/memory_ut.o 00:03:08.270 CXX test/cpp_headers/base64.o 00:03:08.270 LINK reactor_perf 00:03:08.270 CC test/nvme/reset/reset.o 00:03:08.270 CC test/env/pci/pci_ut.o 00:03:08.270 LINK nvme_fuzz 00:03:08.270 LINK env_dpdk_post_init 00:03:08.270 CXX test/cpp_headers/bdev.o 00:03:08.528 CC test/event/app_repeat/app_repeat.o 00:03:08.528 LINK reset 00:03:08.528 CXX test/cpp_headers/bdev_module.o 00:03:08.528 LINK vhost_fuzz 00:03:08.528 CC test/nvme/sgl/sgl.o 00:03:08.528 LINK accel_perf 00:03:08.528 CC test/app/jsoncat/jsoncat.o 00:03:08.528 LINK app_repeat 00:03:08.528 LINK pci_ut 00:03:08.528 LINK jsoncat 00:03:08.528 CXX test/cpp_headers/bdev_zone.o 00:03:08.787 CC test/app/stub/stub.o 00:03:08.787 LINK sgl 00:03:08.787 CC test/nvme/e2edp/nvme_dp.o 00:03:08.787 CXX test/cpp_headers/bit_array.o 00:03:08.787 CC test/event/scheduler/scheduler.o 00:03:08.787 CC test/nvme/overhead/overhead.o 00:03:08.787 LINK stub 00:03:09.047 CC test/nvme/err_injection/err_injection.o 00:03:09.047 CC examples/blob/hello_world/hello_blob.o 00:03:09.047 CC test/nvme/startup/startup.o 00:03:09.047 CXX test/cpp_headers/bit_pool.o 00:03:09.047 LINK nvme_dp 00:03:09.047 LINK scheduler 00:03:09.047 LINK err_injection 00:03:09.047 LINK startup 00:03:09.306 LINK hello_blob 00:03:09.307 CXX test/cpp_headers/blob_bdev.o 00:03:09.307 LINK overhead 00:03:09.307 CC test/accel/dif/dif.o 00:03:09.307 LINK memory_ut 00:03:09.307 CC test/nvme/reserve/reserve.o 00:03:09.307 CC 
test/nvme/simple_copy/simple_copy.o 00:03:09.307 CC test/nvme/connect_stress/connect_stress.o 00:03:09.307 CXX test/cpp_headers/blobfs_bdev.o 00:03:09.307 CC test/nvme/boot_partition/boot_partition.o 00:03:09.566 CC test/nvme/compliance/nvme_compliance.o 00:03:09.566 LINK reserve 00:03:09.566 CC examples/blob/cli/blobcli.o 00:03:09.566 LINK connect_stress 00:03:09.566 LINK simple_copy 00:03:09.566 CXX test/cpp_headers/blobfs.o 00:03:09.566 LINK boot_partition 00:03:09.566 CC examples/nvme/hello_world/hello_world.o 00:03:09.566 CXX test/cpp_headers/blob.o 00:03:09.829 CC examples/nvme/reconnect/reconnect.o 00:03:09.829 LINK iscsi_fuzz 00:03:09.829 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:09.829 CC examples/nvme/arbitration/arbitration.o 00:03:09.829 CC examples/nvme/hotplug/hotplug.o 00:03:09.829 LINK hello_world 00:03:09.829 LINK nvme_compliance 00:03:09.829 CXX test/cpp_headers/conf.o 00:03:09.829 LINK dif 00:03:09.829 CXX test/cpp_headers/config.o 00:03:10.089 CXX test/cpp_headers/cpuset.o 00:03:10.089 CXX test/cpp_headers/crc16.o 00:03:10.089 LINK blobcli 00:03:10.089 CC test/nvme/fused_ordering/fused_ordering.o 00:03:10.089 LINK hotplug 00:03:10.089 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:10.089 LINK reconnect 00:03:10.089 LINK arbitration 00:03:10.089 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:10.089 CXX test/cpp_headers/crc32.o 00:03:10.089 CC examples/nvme/abort/abort.o 00:03:10.348 LINK fused_ordering 00:03:10.348 LINK doorbell_aers 00:03:10.348 CXX test/cpp_headers/crc64.o 00:03:10.348 LINK nvme_manage 00:03:10.348 LINK cmb_copy 00:03:10.348 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:10.348 CC test/nvme/fdp/fdp.o 00:03:10.348 CXX test/cpp_headers/dif.o 00:03:10.348 CC test/blobfs/mkfs/mkfs.o 00:03:10.607 CC test/nvme/cuse/cuse.o 00:03:10.607 LINK pmr_persistence 00:03:10.607 CC test/lvol/esnap/esnap.o 00:03:10.607 LINK abort 00:03:10.607 CXX test/cpp_headers/dma.o 00:03:10.607 CC examples/fsdev/hello_world/hello_fsdev.o 
00:03:10.607 LINK mkfs 00:03:10.607 CC examples/bdev/hello_world/hello_bdev.o 00:03:10.607 CC test/bdev/bdevio/bdevio.o 00:03:10.607 CXX test/cpp_headers/endian.o 00:03:10.607 LINK fdp 00:03:10.607 CXX test/cpp_headers/env_dpdk.o 00:03:10.866 CC examples/bdev/bdevperf/bdevperf.o 00:03:10.866 CXX test/cpp_headers/env.o 00:03:10.866 LINK hello_bdev 00:03:10.866 CXX test/cpp_headers/event.o 00:03:10.866 CXX test/cpp_headers/fd_group.o 00:03:10.866 CXX test/cpp_headers/fd.o 00:03:10.866 LINK hello_fsdev 00:03:10.866 CXX test/cpp_headers/file.o 00:03:10.867 CXX test/cpp_headers/fsdev.o 00:03:10.867 CXX test/cpp_headers/fsdev_module.o 00:03:10.867 CXX test/cpp_headers/ftl.o 00:03:11.126 LINK bdevio 00:03:11.126 CXX test/cpp_headers/fuse_dispatcher.o 00:03:11.126 CXX test/cpp_headers/gpt_spec.o 00:03:11.126 CXX test/cpp_headers/hexlify.o 00:03:11.126 CXX test/cpp_headers/histogram_data.o 00:03:11.126 CXX test/cpp_headers/idxd.o 00:03:11.126 CXX test/cpp_headers/idxd_spec.o 00:03:11.126 CXX test/cpp_headers/init.o 00:03:11.126 CXX test/cpp_headers/ioat.o 00:03:11.126 CXX test/cpp_headers/ioat_spec.o 00:03:11.126 CXX test/cpp_headers/iscsi_spec.o 00:03:11.386 CXX test/cpp_headers/json.o 00:03:11.386 CXX test/cpp_headers/jsonrpc.o 00:03:11.386 CXX test/cpp_headers/keyring.o 00:03:11.386 CXX test/cpp_headers/keyring_module.o 00:03:11.386 CXX test/cpp_headers/likely.o 00:03:11.386 CXX test/cpp_headers/log.o 00:03:11.386 CXX test/cpp_headers/lvol.o 00:03:11.386 CXX test/cpp_headers/md5.o 00:03:11.386 CXX test/cpp_headers/memory.o 00:03:11.386 CXX test/cpp_headers/mmio.o 00:03:11.386 CXX test/cpp_headers/nbd.o 00:03:11.386 CXX test/cpp_headers/net.o 00:03:11.386 CXX test/cpp_headers/notify.o 00:03:11.386 CXX test/cpp_headers/nvme.o 00:03:11.646 CXX test/cpp_headers/nvme_intel.o 00:03:11.646 CXX test/cpp_headers/nvme_ocssd.o 00:03:11.646 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:11.646 CXX test/cpp_headers/nvme_spec.o 00:03:11.646 CXX test/cpp_headers/nvme_zns.o 00:03:11.646 
CXX test/cpp_headers/nvmf_cmd.o 00:03:11.646 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:11.646 LINK bdevperf 00:03:11.646 CXX test/cpp_headers/nvmf.o 00:03:11.646 CXX test/cpp_headers/nvmf_spec.o 00:03:11.646 CXX test/cpp_headers/nvmf_transport.o 00:03:11.646 LINK cuse 00:03:11.646 CXX test/cpp_headers/opal.o 00:03:11.646 CXX test/cpp_headers/opal_spec.o 00:03:11.906 CXX test/cpp_headers/pci_ids.o 00:03:11.906 CXX test/cpp_headers/pipe.o 00:03:11.906 CXX test/cpp_headers/queue.o 00:03:11.906 CXX test/cpp_headers/reduce.o 00:03:11.906 CXX test/cpp_headers/rpc.o 00:03:11.906 CXX test/cpp_headers/scheduler.o 00:03:11.906 CXX test/cpp_headers/scsi.o 00:03:11.906 CXX test/cpp_headers/scsi_spec.o 00:03:11.906 CXX test/cpp_headers/sock.o 00:03:11.906 CXX test/cpp_headers/stdinc.o 00:03:11.906 CXX test/cpp_headers/string.o 00:03:11.907 CXX test/cpp_headers/thread.o 00:03:11.907 CXX test/cpp_headers/trace.o 00:03:11.907 CXX test/cpp_headers/trace_parser.o 00:03:12.167 CXX test/cpp_headers/tree.o 00:03:12.167 CXX test/cpp_headers/ublk.o 00:03:12.167 CC examples/nvmf/nvmf/nvmf.o 00:03:12.167 CXX test/cpp_headers/util.o 00:03:12.167 CXX test/cpp_headers/uuid.o 00:03:12.167 CXX test/cpp_headers/version.o 00:03:12.167 CXX test/cpp_headers/vfio_user_pci.o 00:03:12.167 CXX test/cpp_headers/vfio_user_spec.o 00:03:12.167 CXX test/cpp_headers/vhost.o 00:03:12.167 CXX test/cpp_headers/vmd.o 00:03:12.167 CXX test/cpp_headers/xor.o 00:03:12.167 CXX test/cpp_headers/zipf.o 00:03:12.427 LINK nvmf 00:03:16.629 LINK esnap 00:03:16.629 00:03:16.629 real 1m21.788s 00:03:16.629 user 6m54.814s 00:03:16.629 sys 1m39.771s 00:03:16.629 21:34:35 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:16.629 21:34:35 make -- common/autotest_common.sh@10 -- $ set +x 00:03:16.629 ************************************ 00:03:16.629 END TEST make 00:03:16.629 ************************************ 00:03:16.629 21:34:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:16.629 21:34:35 -- 
pm/common@29 -- $ signal_monitor_resources TERM 00:03:16.629 21:34:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:16.629 21:34:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.629 21:34:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:16.629 21:34:35 -- pm/common@44 -- $ pid=5454 00:03:16.629 21:34:35 -- pm/common@50 -- $ kill -TERM 5454 00:03:16.629 21:34:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.629 21:34:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:16.629 21:34:35 -- pm/common@44 -- $ pid=5456 00:03:16.629 21:34:35 -- pm/common@50 -- $ kill -TERM 5456 00:03:16.629 21:34:35 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:16.629 21:34:35 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:16.629 21:34:35 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:16.629 21:34:35 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:16.629 21:34:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:16.629 21:34:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:16.629 21:34:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:16.629 21:34:35 -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.629 21:34:35 -- scripts/common.sh@336 -- # read -ra ver1 00:03:16.629 21:34:35 -- scripts/common.sh@337 -- # IFS=.-: 00:03:16.629 21:34:35 -- scripts/common.sh@337 -- # read -ra ver2 00:03:16.629 21:34:35 -- scripts/common.sh@338 -- # local 'op=<' 00:03:16.629 21:34:35 -- scripts/common.sh@340 -- # ver1_l=2 00:03:16.629 21:34:35 -- scripts/common.sh@341 -- # ver2_l=1 00:03:16.629 21:34:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:16.629 21:34:35 -- scripts/common.sh@344 -- # case "$op" in 00:03:16.629 21:34:35 -- scripts/common.sh@345 -- # : 1 00:03:16.629 21:34:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:16.629 21:34:35 -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:16.629 21:34:35 -- scripts/common.sh@365 -- # decimal 1 00:03:16.629 21:34:35 -- scripts/common.sh@353 -- # local d=1 00:03:16.629 21:34:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.629 21:34:35 -- scripts/common.sh@355 -- # echo 1 00:03:16.629 21:34:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:16.629 21:34:35 -- scripts/common.sh@366 -- # decimal 2 00:03:16.629 21:34:35 -- scripts/common.sh@353 -- # local d=2 00:03:16.629 21:34:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.629 21:34:35 -- scripts/common.sh@355 -- # echo 2 00:03:16.629 21:34:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:16.629 21:34:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:16.629 21:34:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:16.629 21:34:35 -- scripts/common.sh@368 -- # return 0 00:03:16.629 21:34:35 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.629 21:34:35 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:16.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.629 --rc genhtml_branch_coverage=1 00:03:16.629 --rc genhtml_function_coverage=1 00:03:16.629 --rc genhtml_legend=1 00:03:16.629 --rc geninfo_all_blocks=1 00:03:16.629 --rc geninfo_unexecuted_blocks=1 00:03:16.629 00:03:16.629 ' 00:03:16.629 21:34:35 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:16.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.629 --rc genhtml_branch_coverage=1 00:03:16.629 --rc genhtml_function_coverage=1 00:03:16.629 --rc genhtml_legend=1 00:03:16.629 --rc geninfo_all_blocks=1 00:03:16.629 --rc geninfo_unexecuted_blocks=1 00:03:16.629 00:03:16.629 ' 00:03:16.629 21:34:35 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:16.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.629 --rc 
genhtml_branch_coverage=1 00:03:16.629 --rc genhtml_function_coverage=1 00:03:16.629 --rc genhtml_legend=1 00:03:16.630 --rc geninfo_all_blocks=1 00:03:16.630 --rc geninfo_unexecuted_blocks=1 00:03:16.630 00:03:16.630 ' 00:03:16.630 21:34:35 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:16.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.630 --rc genhtml_branch_coverage=1 00:03:16.630 --rc genhtml_function_coverage=1 00:03:16.630 --rc genhtml_legend=1 00:03:16.630 --rc geninfo_all_blocks=1 00:03:16.630 --rc geninfo_unexecuted_blocks=1 00:03:16.630 00:03:16.630 ' 00:03:16.630 21:34:35 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:16.630 21:34:35 -- nvmf/common.sh@7 -- # uname -s 00:03:16.630 21:34:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:16.630 21:34:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:16.630 21:34:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:16.630 21:34:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:16.630 21:34:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:16.630 21:34:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:16.630 21:34:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:16.630 21:34:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:16.630 21:34:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:16.630 21:34:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:16.630 21:34:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5370061d-ca0e-42cc-a5d6-16f235e3b196 00:03:16.630 21:34:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5370061d-ca0e-42cc-a5d6-16f235e3b196 00:03:16.630 21:34:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:16.630 21:34:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:16.630 21:34:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:16.630 21:34:35 -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:16.630 21:34:35 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:16.630 21:34:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:16.630 21:34:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:16.630 21:34:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:16.630 21:34:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:16.630 21:34:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.630 21:34:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.630 21:34:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.630 21:34:35 -- paths/export.sh@5 -- # export PATH 00:03:16.630 21:34:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.630 21:34:35 -- nvmf/common.sh@51 -- # : 0 00:03:16.630 21:34:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:16.630 21:34:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:03:16.630 21:34:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:16.630 21:34:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:16.630 21:34:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:16.630 21:34:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:16.630 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:16.630 21:34:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:16.630 21:34:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:16.630 21:34:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:16.630 21:34:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:16.630 21:34:35 -- spdk/autotest.sh@32 -- # uname -s 00:03:16.630 21:34:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:16.630 21:34:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:16.630 21:34:35 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:16.630 21:34:35 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:16.630 21:34:35 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:16.630 21:34:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:16.630 21:34:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:16.630 21:34:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:16.630 21:34:35 -- spdk/autotest.sh@48 -- # udevadm_pid=54397 00:03:16.630 21:34:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:16.630 21:34:35 -- pm/common@17 -- # local monitor 00:03:16.630 21:34:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:16.630 21:34:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.630 21:34:35 -- pm/common@21 -- # date +%s 00:03:16.630 21:34:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.630 21:34:35 -- pm/common@25 -- 
# sleep 1 00:03:16.630 21:34:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727645675 00:03:16.630 21:34:35 -- pm/common@21 -- # date +%s 00:03:16.630 21:34:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727645675 00:03:16.630 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727645675_collect-cpu-load.pm.log 00:03:16.630 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727645675_collect-vmstat.pm.log 00:03:17.570 21:34:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:17.570 21:34:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:17.570 21:34:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:17.570 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:03:17.570 21:34:36 -- spdk/autotest.sh@59 -- # create_test_list 00:03:17.570 21:34:36 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:17.570 21:34:36 -- common/autotest_common.sh@10 -- # set +x 00:03:17.830 21:34:36 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:17.830 21:34:36 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:17.830 21:34:36 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:17.830 21:34:36 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:17.830 21:34:36 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:17.830 21:34:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:17.830 21:34:36 -- common/autotest_common.sh@1455 -- # uname 00:03:17.830 21:34:36 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:17.830 21:34:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:17.830 21:34:36 -- 
common/autotest_common.sh@1475 -- # uname 00:03:17.830 21:34:36 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:17.830 21:34:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:17.830 21:34:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:17.830 lcov: LCOV version 1.15 00:03:17.830 21:34:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:32.721 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:32.721 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:47.652 21:35:05 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:47.652 21:35:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:47.652 21:35:05 -- common/autotest_common.sh@10 -- # set +x 00:03:47.652 21:35:05 -- spdk/autotest.sh@78 -- # rm -f 00:03:47.652 21:35:05 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.652 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.652 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:47.652 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:47.652 21:35:05 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:47.652 21:35:05 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:47.652 21:35:05 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:47.652 21:35:05 -- common/autotest_common.sh@1656 -- # 
local nvme bdf 00:03:47.653 21:35:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:47.653 21:35:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:47.653 21:35:05 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:47.653 21:35:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:47.653 21:35:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:47.653 21:35:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:47.653 21:35:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:47.653 21:35:05 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:47.653 21:35:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:47.653 21:35:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:47.653 21:35:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:47.653 21:35:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:03:47.653 21:35:05 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:03:47.653 21:35:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:47.653 21:35:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:47.653 21:35:05 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:47.653 21:35:05 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:03:47.653 21:35:05 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:03:47.653 21:35:05 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:47.653 21:35:05 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:47.653 21:35:05 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:47.653 21:35:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.653 21:35:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:47.653 21:35:05 -- 
spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:47.653 21:35:05 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:47.653 21:35:05 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:47.653 No valid GPT data, bailing 00:03:47.653 21:35:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:47.653 21:35:06 -- scripts/common.sh@394 -- # pt= 00:03:47.653 21:35:06 -- scripts/common.sh@395 -- # return 1 00:03:47.653 21:35:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:47.653 1+0 records in 00:03:47.653 1+0 records out 00:03:47.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00482798 s, 217 MB/s 00:03:47.653 21:35:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.653 21:35:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:47.653 21:35:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:47.653 21:35:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:47.653 21:35:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:47.653 No valid GPT data, bailing 00:03:47.653 21:35:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:47.653 21:35:06 -- scripts/common.sh@394 -- # pt= 00:03:47.653 21:35:06 -- scripts/common.sh@395 -- # return 1 00:03:47.653 21:35:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:47.653 1+0 records in 00:03:47.653 1+0 records out 00:03:47.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00715555 s, 147 MB/s 00:03:47.653 21:35:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.653 21:35:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:47.653 21:35:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:47.653 21:35:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:47.653 21:35:06 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:47.653 No valid GPT data, bailing 00:03:47.653 21:35:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:47.653 21:35:06 -- scripts/common.sh@394 -- # pt= 00:03:47.653 21:35:06 -- scripts/common.sh@395 -- # return 1 00:03:47.653 21:35:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:47.653 1+0 records in 00:03:47.653 1+0 records out 00:03:47.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00716261 s, 146 MB/s 00:03:47.653 21:35:06 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.653 21:35:06 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:47.653 21:35:06 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:47.653 21:35:06 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:47.653 21:35:06 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:47.653 No valid GPT data, bailing 00:03:47.653 21:35:06 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:47.653 21:35:06 -- scripts/common.sh@394 -- # pt= 00:03:47.653 21:35:06 -- scripts/common.sh@395 -- # return 1 00:03:47.653 21:35:06 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:47.653 1+0 records in 00:03:47.653 1+0 records out 00:03:47.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00675145 s, 155 MB/s 00:03:47.653 21:35:06 -- spdk/autotest.sh@105 -- # sync 00:03:47.653 21:35:06 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:47.653 21:35:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:47.653 21:35:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:50.942 21:35:09 -- spdk/autotest.sh@111 -- # uname -s 00:03:50.942 21:35:09 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:50.942 21:35:09 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:50.942 21:35:09 -- spdk/autotest.sh@115 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:51.201 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:51.201 Hugepages 00:03:51.201 node hugesize free / total 00:03:51.201 node0 1048576kB 0 / 0 00:03:51.201 node0 2048kB 0 / 0 00:03:51.201 00:03:51.201 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:51.460 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:51.460 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:51.718 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:51.718 21:35:10 -- spdk/autotest.sh@117 -- # uname -s 00:03:51.718 21:35:10 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:51.718 21:35:10 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:51.718 21:35:10 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:52.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.656 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.656 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.656 21:35:11 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:53.596 21:35:12 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:53.596 21:35:12 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:53.596 21:35:12 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:53.596 21:35:12 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:53.596 21:35:12 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:53.596 21:35:12 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:53.596 21:35:12 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.596 21:35:12 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:53.596 21:35:12 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:03:53.856 21:35:12 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:03:53.856 21:35:12 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:53.856 21:35:12 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:54.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.425 Waiting for block devices as requested 00:03:54.425 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:54.425 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:54.425 21:35:13 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:54.425 21:35:13 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:54.425 21:35:13 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:03:54.426 21:35:13 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:54.426 21:35:13 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:54.426 21:35:13 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:54.426 21:35:13 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:54.426 21:35:13 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:03:54.426 21:35:13 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:03:54.426 21:35:13 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:03:54.426 21:35:13 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:54.426 21:35:13 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:03:54.685 21:35:13 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:54.685 21:35:13 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:54.685 21:35:13 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 
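The OACS handling traced just above (`grep oacs`, `cut -d: -f2`, then deriving `oacs_ns_manage=8`) amounts to masking bit 3 of the Optional Admin Command Support field. A minimal standalone sketch of that pipeline; the sample `id-ctrl` line mirrors the `oacs=' 0x12a'` value from this log, and `parse_oacs_ns_manage` is an illustrative helper name, not an SPDK function:

```shell
#!/usr/bin/env bash
# Extract the OACS field from `nvme id-ctrl` output and test bit 3
# (0x8), which advertises Namespace Management/Attachment support.
# parse_oacs_ns_manage is a hypothetical helper, not part of SPDK.
parse_oacs_ns_manage() {
    local id_ctrl_output=$1 oacs
    # Same pipeline as the trace: grep the oacs line, take the value
    # after the colon, strip whitespace.
    oacs=$(grep oacs <<<"$id_ctrl_output" | cut -d: -f2 | tr -d ' ')
    # Bit 3 (0x8) of OACS indicates Namespace Management support.
    echo $(( oacs & 0x8 ))
}

# Sample field as it appears for the controllers in this log:
sample='oacs      : 0x12a'
parse_oacs_ns_manage "$sample"   # prints 8 -> NS management supported
```

With `oacs=0x12a` (binary 100101010), bit 3 is set, so the mask yields 8 and the trace's `[[ 8 -ne 0 ]]` branch proceeds to check `unvmcap`.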
00:03:54.685 21:35:13 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:54.685 21:35:13 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:03:54.685 21:35:13 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:54.685 21:35:13 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:54.685 21:35:13 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:54.685 21:35:13 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:54.685 21:35:13 -- common/autotest_common.sh@1541 -- # continue 00:03:54.685 21:35:13 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:54.685 21:35:13 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:54.685 21:35:13 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:54.685 21:35:13 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:03:54.685 21:35:13 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:54.685 21:35:13 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:54.685 21:35:13 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:54.685 21:35:13 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:54.685 21:35:13 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:54.685 21:35:13 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:54.685 21:35:13 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:54.685 21:35:13 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:54.685 21:35:13 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:54.685 21:35:13 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:03:54.685 21:35:13 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:54.685 21:35:13 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:54.685 21:35:13 
-- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:54.685 21:35:13 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:54.685 21:35:13 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:54.685 21:35:13 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:54.685 21:35:13 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:54.685 21:35:13 -- common/autotest_common.sh@1541 -- # continue 00:03:54.686 21:35:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:54.686 21:35:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:54.686 21:35:13 -- common/autotest_common.sh@10 -- # set +x 00:03:54.686 21:35:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:54.686 21:35:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:54.686 21:35:13 -- common/autotest_common.sh@10 -- # set +x 00:03:54.686 21:35:13 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.626 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.626 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:55.626 21:35:14 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:55.626 21:35:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:55.626 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:03:55.886 21:35:14 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:55.886 21:35:14 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:55.886 21:35:14 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:55.886 21:35:14 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:55.886 21:35:14 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:55.886 21:35:14 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:55.886 21:35:14 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:55.886 21:35:14 -- 
common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:55.886 21:35:14 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:55.886 21:35:14 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:55.886 21:35:14 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:55.886 21:35:14 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:55.886 21:35:14 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:55.886 21:35:14 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:03:55.886 21:35:14 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:55.886 21:35:14 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:55.886 21:35:14 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:55.886 21:35:14 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:55.886 21:35:14 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:55.886 21:35:14 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:55.886 21:35:14 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:55.886 21:35:14 -- common/autotest_common.sh@1564 -- # device=0x0010 00:03:55.886 21:35:14 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:55.886 21:35:14 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:55.886 21:35:14 -- common/autotest_common.sh@1570 -- # return 0 00:03:55.886 21:35:14 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:55.886 21:35:14 -- common/autotest_common.sh@1578 -- # return 0 00:03:55.886 21:35:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:55.886 21:35:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:55.886 21:35:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:55.886 21:35:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:55.886 21:35:14 -- 
spdk/autotest.sh@149 -- # timing_enter lib 00:03:55.886 21:35:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:55.886 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:03:55.886 21:35:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:55.886 21:35:14 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:55.886 21:35:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:55.886 21:35:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.886 21:35:14 -- common/autotest_common.sh@10 -- # set +x 00:03:55.886 ************************************ 00:03:55.886 START TEST env 00:03:55.886 ************************************ 00:03:55.886 21:35:14 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:56.146 * Looking for test storage... 00:03:56.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:56.146 21:35:14 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:56.146 21:35:14 env -- common/autotest_common.sh@1681 -- # lcov --version 00:03:56.146 21:35:14 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:56.146 21:35:14 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:56.146 21:35:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.146 21:35:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.146 21:35:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.146 21:35:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.146 21:35:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.146 21:35:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.146 21:35:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:56.146 21:35:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.146 21:35:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.146 21:35:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.146 21:35:14 env -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:03:56.147 21:35:14 env -- scripts/common.sh@344 -- # case "$op" in 00:03:56.147 21:35:14 env -- scripts/common.sh@345 -- # : 1 00:03:56.147 21:35:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.147 21:35:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:56.147 21:35:14 env -- scripts/common.sh@365 -- # decimal 1 00:03:56.147 21:35:14 env -- scripts/common.sh@353 -- # local d=1 00:03:56.147 21:35:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.147 21:35:14 env -- scripts/common.sh@355 -- # echo 1 00:03:56.147 21:35:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.147 21:35:14 env -- scripts/common.sh@366 -- # decimal 2 00:03:56.147 21:35:14 env -- scripts/common.sh@353 -- # local d=2 00:03:56.147 21:35:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.147 21:35:15 env -- scripts/common.sh@355 -- # echo 2 00:03:56.147 21:35:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.147 21:35:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.147 21:35:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.147 21:35:15 env -- scripts/common.sh@368 -- # return 0 00:03:56.147 21:35:15 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.147 21:35:15 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.147 --rc genhtml_branch_coverage=1 00:03:56.147 --rc genhtml_function_coverage=1 00:03:56.147 --rc genhtml_legend=1 00:03:56.147 --rc geninfo_all_blocks=1 00:03:56.147 --rc geninfo_unexecuted_blocks=1 00:03:56.147 00:03:56.147 ' 00:03:56.147 21:35:15 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.147 --rc genhtml_branch_coverage=1 00:03:56.147 --rc genhtml_function_coverage=1 
00:03:56.147 --rc genhtml_legend=1 00:03:56.147 --rc geninfo_all_blocks=1 00:03:56.147 --rc geninfo_unexecuted_blocks=1 00:03:56.147 00:03:56.147 ' 00:03:56.147 21:35:15 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.147 --rc genhtml_branch_coverage=1 00:03:56.147 --rc genhtml_function_coverage=1 00:03:56.147 --rc genhtml_legend=1 00:03:56.147 --rc geninfo_all_blocks=1 00:03:56.147 --rc geninfo_unexecuted_blocks=1 00:03:56.147 00:03:56.147 ' 00:03:56.147 21:35:15 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:56.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.147 --rc genhtml_branch_coverage=1 00:03:56.147 --rc genhtml_function_coverage=1 00:03:56.147 --rc genhtml_legend=1 00:03:56.147 --rc geninfo_all_blocks=1 00:03:56.147 --rc geninfo_unexecuted_blocks=1 00:03:56.147 00:03:56.147 ' 00:03:56.147 21:35:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:56.147 21:35:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.147 21:35:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.147 21:35:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.147 ************************************ 00:03:56.147 START TEST env_memory 00:03:56.147 ************************************ 00:03:56.147 21:35:15 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:56.147 00:03:56.147 00:03:56.147 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.147 http://cunit.sourceforge.net/ 00:03:56.147 00:03:56.147 00:03:56.147 Suite: memory 00:03:56.147 Test: alloc and free memory map ...[2024-09-29 21:35:15.098354] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:56.414 passed 00:03:56.414 Test: mem map translation 
...[2024-09-29 21:35:15.138584] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:56.414 [2024-09-29 21:35:15.138625] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:56.414 [2024-09-29 21:35:15.138693] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:56.414 [2024-09-29 21:35:15.138711] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:56.414 passed 00:03:56.414 Test: mem map registration ...[2024-09-29 21:35:15.199900] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:56.414 [2024-09-29 21:35:15.199935] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:56.414 passed 00:03:56.414 Test: mem map adjacent registrations ...passed 00:03:56.414 00:03:56.414 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.414 suites 1 1 n/a 0 0 00:03:56.414 tests 4 4 4 0 0 00:03:56.414 asserts 152 152 152 0 n/a 00:03:56.414 00:03:56.414 Elapsed time = 0.234 seconds 00:03:56.414 00:03:56.414 real 0m0.294s 00:03:56.414 user 0m0.252s 00:03:56.414 sys 0m0.031s 00:03:56.414 21:35:15 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:56.414 21:35:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:56.414 ************************************ 00:03:56.414 END TEST env_memory 00:03:56.414 ************************************ 00:03:56.414 21:35:15 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:56.414 21:35:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:56.414 21:35:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.414 21:35:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.415 ************************************ 00:03:56.415 START TEST env_vtophys 00:03:56.415 ************************************ 00:03:56.415 21:35:15 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:56.684 EAL: lib.eal log level changed from notice to debug 00:03:56.684 EAL: Detected lcore 0 as core 0 on socket 0 00:03:56.685 EAL: Detected lcore 1 as core 0 on socket 0 00:03:56.685 EAL: Detected lcore 2 as core 0 on socket 0 00:03:56.685 EAL: Detected lcore 3 as core 0 on socket 0 00:03:56.685 EAL: Detected lcore 4 as core 0 on socket 0 00:03:56.685 EAL: Detected lcore 5 as core 0 on socket 0 00:03:56.685 EAL: Detected lcore 6 as core 0 on socket 0 00:03:56.685 EAL: Detected lcore 7 as core 0 on socket 0 00:03:56.685 EAL: Detected lcore 8 as core 0 on socket 0 00:03:56.685 EAL: Detected lcore 9 as core 0 on socket 0 00:03:56.685 EAL: Maximum logical cores by configuration: 128 00:03:56.685 EAL: Detected CPU lcores: 10 00:03:56.685 EAL: Detected NUMA nodes: 1 00:03:56.685 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:56.685 EAL: Detected shared linkage of DPDK 00:03:56.685 EAL: No shared files mode enabled, IPC will be disabled 00:03:56.685 EAL: Selected IOVA mode 'PA' 00:03:56.685 EAL: Probing VFIO support... 00:03:56.685 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:56.685 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:56.685 EAL: Ask a virtual area of 0x2e000 bytes 00:03:56.685 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:56.685 EAL: Setting up physically contiguous memory... 
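EAL's "Probing VFIO support..." lines above reduce to a sysfs existence check: the `vfio` kernel module directory is absent, so EAL skips VFIO and selects IOVA mode 'PA'. A rough sketch of that probe; `probe_vfio` and the overridable `SYS_MODULE` root are illustrative, added only so the check can be exercised without root or real hardware:

```shell
#!/usr/bin/env bash
# Approximation of EAL's VFIO probe: VFIO is usable only if the
# kernel module is loaded, i.e. /sys/module/vfio exists.
# probe_vfio is a hypothetical helper; SYS_MODULE defaults to the
# real sysfs path but can point at a fake tree for testing.
probe_vfio() {
    local root=${SYS_MODULE:-/sys/module}
    if [[ -d "$root/vfio" ]]; then
        echo "VFIO support available"
    else
        # Matches the log: module not found, EAL falls back to IOVA 'PA'.
        echo "VFIO modules not loaded, skipping VFIO support..."
    fi
}

# Exercise both branches against a throwaway directory tree:
tmp=$(mktemp -d)
SYS_MODULE=$tmp probe_vfio           # module directory absent
mkdir -p "$tmp/vfio"
SYS_MODULE=$tmp probe_vfio           # module directory present
rm -rf "$tmp"
```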
00:03:56.685 EAL: Setting maximum number of open files to 524288 00:03:56.685 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:56.685 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:56.685 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.685 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:56.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.685 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.685 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:56.685 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:56.685 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.685 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:56.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.685 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.685 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:56.685 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:56.685 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.685 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:56.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.685 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.685 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:56.685 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:56.685 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.685 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:56.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.685 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.685 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:56.685 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:56.685 EAL: Hugepages will be freed exactly as allocated. 
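The reservation sizes in the EAL lines above are internally consistent: each of the 4 memseg lists holds `n_segs:8192` segments of `hugepage_sz:2097152` bytes, which is exactly the `0x400000000` (16 GiB) virtual area requested per list. A quick arithmetic check of that math:

```shell
#!/usr/bin/env bash
# Verify the EAL memseg math from the log: 8192 segments per list,
# 2 MiB hugepages, 4 lists -> per-list VA size and total reservation.
n_segs=8192
hugepage_sz=2097152          # 2 MiB, from "hugepage_sz:2097152"
n_lists=4

per_list=$(( n_segs * hugepage_sz ))
printf 'per-list VA size: 0x%x bytes\n' "$per_list"   # 0x400000000, as in the log
printf 'total across %d lists: %d GiB\n' "$n_lists" \
    $(( per_list * n_lists / 1024**3 ))               # 64 GiB reserved
```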
00:03:56.685 EAL: No shared files mode enabled, IPC is disabled 00:03:56.685 EAL: No shared files mode enabled, IPC is disabled 00:03:56.685 EAL: TSC frequency is ~2290000 KHz 00:03:56.685 EAL: Main lcore 0 is ready (tid=7f585f84ca40;cpuset=[0]) 00:03:56.685 EAL: Trying to obtain current memory policy. 00:03:56.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.685 EAL: Restoring previous memory policy: 0 00:03:56.685 EAL: request: mp_malloc_sync 00:03:56.685 EAL: No shared files mode enabled, IPC is disabled 00:03:56.685 EAL: Heap on socket 0 was expanded by 2MB 00:03:56.685 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:56.685 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:56.685 EAL: Mem event callback 'spdk:(nil)' registered 00:03:56.685 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:56.685 00:03:56.685 00:03:56.685 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.685 http://cunit.sourceforge.net/ 00:03:56.685 00:03:56.685 00:03:56.685 Suite: components_suite 00:03:57.257 Test: vtophys_malloc_test ...passed 00:03:57.257 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:57.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.257 EAL: Restoring previous memory policy: 4 00:03:57.257 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.257 EAL: request: mp_malloc_sync 00:03:57.257 EAL: No shared files mode enabled, IPC is disabled 00:03:57.257 EAL: Heap on socket 0 was expanded by 4MB 00:03:57.257 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.257 EAL: request: mp_malloc_sync 00:03:57.257 EAL: No shared files mode enabled, IPC is disabled 00:03:57.257 EAL: Heap on socket 0 was shrunk by 4MB 00:03:57.257 EAL: Trying to obtain current memory policy. 
00:03:57.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.257 EAL: Restoring previous memory policy: 4 00:03:57.257 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.257 EAL: request: mp_malloc_sync 00:03:57.257 EAL: No shared files mode enabled, IPC is disabled 00:03:57.257 EAL: Heap on socket 0 was expanded by 6MB 00:03:57.257 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.257 EAL: request: mp_malloc_sync 00:03:57.257 EAL: No shared files mode enabled, IPC is disabled 00:03:57.257 EAL: Heap on socket 0 was shrunk by 6MB 00:03:57.257 EAL: Trying to obtain current memory policy. 00:03:57.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.257 EAL: Restoring previous memory policy: 4 00:03:57.257 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.257 EAL: request: mp_malloc_sync 00:03:57.257 EAL: No shared files mode enabled, IPC is disabled 00:03:57.257 EAL: Heap on socket 0 was expanded by 10MB 00:03:57.257 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.257 EAL: request: mp_malloc_sync 00:03:57.257 EAL: No shared files mode enabled, IPC is disabled 00:03:57.257 EAL: Heap on socket 0 was shrunk by 10MB 00:03:57.257 EAL: Trying to obtain current memory policy. 00:03:57.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.257 EAL: Restoring previous memory policy: 4 00:03:57.257 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.257 EAL: request: mp_malloc_sync 00:03:57.257 EAL: No shared files mode enabled, IPC is disabled 00:03:57.257 EAL: Heap on socket 0 was expanded by 18MB 00:03:57.257 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.257 EAL: request: mp_malloc_sync 00:03:57.257 EAL: No shared files mode enabled, IPC is disabled 00:03:57.257 EAL: Heap on socket 0 was shrunk by 18MB 00:03:57.257 EAL: Trying to obtain current memory policy. 
00:03:57.257 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.257 EAL: Restoring previous memory policy: 4 00:03:57.257 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.257 EAL: request: mp_malloc_sync 00:03:57.257 EAL: No shared files mode enabled, IPC is disabled 00:03:57.257 EAL: Heap on socket 0 was expanded by 34MB 00:03:57.257 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.257 EAL: request: mp_malloc_sync 00:03:57.257 EAL: No shared files mode enabled, IPC is disabled 00:03:57.257 EAL: Heap on socket 0 was shrunk by 34MB 00:03:57.517 EAL: Trying to obtain current memory policy. 00:03:57.517 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.517 EAL: Restoring previous memory policy: 4 00:03:57.517 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.517 EAL: request: mp_malloc_sync 00:03:57.517 EAL: No shared files mode enabled, IPC is disabled 00:03:57.517 EAL: Heap on socket 0 was expanded by 66MB 00:03:57.517 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.517 EAL: request: mp_malloc_sync 00:03:57.517 EAL: No shared files mode enabled, IPC is disabled 00:03:57.517 EAL: Heap on socket 0 was shrunk by 66MB 00:03:57.777 EAL: Trying to obtain current memory policy. 00:03:57.777 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.777 EAL: Restoring previous memory policy: 4 00:03:57.777 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.777 EAL: request: mp_malloc_sync 00:03:57.777 EAL: No shared files mode enabled, IPC is disabled 00:03:57.777 EAL: Heap on socket 0 was expanded by 130MB 00:03:58.037 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.037 EAL: request: mp_malloc_sync 00:03:58.037 EAL: No shared files mode enabled, IPC is disabled 00:03:58.037 EAL: Heap on socket 0 was shrunk by 130MB 00:03:58.297 EAL: Trying to obtain current memory policy. 
00:03:58.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:58.297 EAL: Restoring previous memory policy: 4 00:03:58.297 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.297 EAL: request: mp_malloc_sync 00:03:58.297 EAL: No shared files mode enabled, IPC is disabled 00:03:58.297 EAL: Heap on socket 0 was expanded by 258MB 00:03:58.866 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.866 EAL: request: mp_malloc_sync 00:03:58.866 EAL: No shared files mode enabled, IPC is disabled 00:03:58.866 EAL: Heap on socket 0 was shrunk by 258MB 00:03:59.126 EAL: Trying to obtain current memory policy. 00:03:59.126 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:59.385 EAL: Restoring previous memory policy: 4 00:03:59.385 EAL: Calling mem event callback 'spdk:(nil)' 00:03:59.385 EAL: request: mp_malloc_sync 00:03:59.385 EAL: No shared files mode enabled, IPC is disabled 00:03:59.385 EAL: Heap on socket 0 was expanded by 514MB 00:04:00.323 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.583 EAL: request: mp_malloc_sync 00:04:00.583 EAL: No shared files mode enabled, IPC is disabled 00:04:00.583 EAL: Heap on socket 0 was shrunk by 514MB 00:04:01.522 EAL: Trying to obtain current memory policy. 
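The `vtophys_spdk_malloc_test` expand/shrink rounds traced through this section follow a simple ladder: each step requests 2^k + 2 MB for k = 1..10 (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB). A sketch that reproduces the sequence; the generator is illustrative — the actual sizes come from the CUnit test, not this script:

```shell
#!/usr/bin/env bash
# Reproduce the allocation-size ladder seen in the malloc test log:
# each round expands the heap by (2^k + 2) MB, then shrinks it again.
malloc_test_sizes() {
    local k
    for k in $(seq 1 10); do
        echo $(( (1 << k) + 2 ))
    done
}

malloc_test_sizes | tr '\n' ' '; echo
# 4 6 10 18 34 66 130 258 514 1026 -> matches the expand/shrink lines
```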
00:04:01.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.782 EAL: Restoring previous memory policy: 4 00:04:01.782 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.782 EAL: request: mp_malloc_sync 00:04:01.782 EAL: No shared files mode enabled, IPC is disabled 00:04:01.782 EAL: Heap on socket 0 was expanded by 1026MB 00:04:03.693 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.953 EAL: request: mp_malloc_sync 00:04:03.953 EAL: No shared files mode enabled, IPC is disabled 00:04:03.953 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:05.335 passed 00:04:05.335 00:04:05.335 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.335 suites 1 1 n/a 0 0 00:04:05.335 tests 2 2 2 0 0 00:04:05.335 asserts 5782 5782 5782 0 n/a 00:04:05.335 00:04:05.335 Elapsed time = 8.618 seconds 00:04:05.335 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.335 EAL: request: mp_malloc_sync 00:04:05.335 EAL: No shared files mode enabled, IPC is disabled 00:04:05.335 EAL: Heap on socket 0 was shrunk by 2MB 00:04:05.335 EAL: No shared files mode enabled, IPC is disabled 00:04:05.335 EAL: No shared files mode enabled, IPC is disabled 00:04:05.335 EAL: No shared files mode enabled, IPC is disabled 00:04:05.335 00:04:05.335 real 0m8.926s 00:04:05.335 user 0m7.562s 00:04:05.335 sys 0m1.213s 00:04:05.335 21:35:24 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.335 21:35:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:05.335 ************************************ 00:04:05.335 END TEST env_vtophys 00:04:05.335 ************************************ 00:04:05.595 21:35:24 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:05.595 21:35:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:05.595 21:35:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.595 21:35:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.595 
************************************ 00:04:05.595 START TEST env_pci 00:04:05.595 ************************************ 00:04:05.595 21:35:24 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:05.595 00:04:05.595 00:04:05.595 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.595 http://cunit.sourceforge.net/ 00:04:05.595 00:04:05.595 00:04:05.595 Suite: pci 00:04:05.595 Test: pci_hook ...[2024-09-29 21:35:24.417760] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56703 has claimed it 00:04:05.595 passed 00:04:05.595 00:04:05.595 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.595 suites 1 1 n/a 0 0 00:04:05.595 tests 1 1 1 0 0 00:04:05.595 asserts 25 25 25 0 n/a 00:04:05.595 00:04:05.595 Elapsed time = 0.005 seconds 00:04:05.595 EAL: Cannot find device (10000:00:01.0) 00:04:05.595 EAL: Failed to attach device on primary process 00:04:05.595 00:04:05.595 real 0m0.103s 00:04:05.595 user 0m0.045s 00:04:05.595 sys 0m0.057s 00:04:05.595 21:35:24 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.595 21:35:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:05.595 ************************************ 00:04:05.595 END TEST env_pci 00:04:05.595 ************************************ 00:04:05.595 21:35:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:05.595 21:35:24 env -- env/env.sh@15 -- # uname 00:04:05.595 21:35:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:05.595 21:35:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:05.596 21:35:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.596 21:35:24 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:05.596 21:35:24 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:05.596 21:35:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.596 ************************************ 00:04:05.596 START TEST env_dpdk_post_init 00:04:05.596 ************************************ 00:04:05.596 21:35:24 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.856 EAL: Detected CPU lcores: 10 00:04:05.856 EAL: Detected NUMA nodes: 1 00:04:05.856 EAL: Detected shared linkage of DPDK 00:04:05.856 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:05.856 EAL: Selected IOVA mode 'PA' 00:04:05.856 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:05.856 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:05.856 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:05.856 Starting DPDK initialization... 00:04:05.856 Starting SPDK post initialization... 00:04:05.856 SPDK NVMe probe 00:04:05.856 Attaching to 0000:00:10.0 00:04:05.856 Attaching to 0000:00:11.0 00:04:05.856 Attached to 0000:00:10.0 00:04:05.856 Attached to 0000:00:11.0 00:04:05.856 Cleaning up... 
00:04:05.856 00:04:05.856 real 0m0.272s 00:04:05.856 user 0m0.082s 00:04:05.856 sys 0m0.092s 00:04:05.856 21:35:24 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:05.856 21:35:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:05.856 ************************************ 00:04:05.856 END TEST env_dpdk_post_init 00:04:05.857 ************************************ 00:04:06.117 21:35:24 env -- env/env.sh@26 -- # uname 00:04:06.117 21:35:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:06.117 21:35:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.117 21:35:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.117 21:35:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.117 21:35:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.117 ************************************ 00:04:06.117 START TEST env_mem_callbacks 00:04:06.117 ************************************ 00:04:06.117 21:35:24 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.117 EAL: Detected CPU lcores: 10 00:04:06.117 EAL: Detected NUMA nodes: 1 00:04:06.117 EAL: Detected shared linkage of DPDK 00:04:06.117 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.117 EAL: Selected IOVA mode 'PA' 00:04:06.117 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.117 00:04:06.117 00:04:06.117 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.117 http://cunit.sourceforge.net/ 00:04:06.117 00:04:06.117 00:04:06.117 Suite: memory 00:04:06.117 Test: test ... 
00:04:06.117 register 0x200000200000 2097152 00:04:06.117 malloc 3145728 00:04:06.117 register 0x200000400000 4194304 00:04:06.117 buf 0x2000004fffc0 len 3145728 PASSED 00:04:06.117 malloc 64 00:04:06.117 buf 0x2000004ffec0 len 64 PASSED 00:04:06.117 malloc 4194304 00:04:06.117 register 0x200000800000 6291456 00:04:06.117 buf 0x2000009fffc0 len 4194304 PASSED 00:04:06.117 free 0x2000004fffc0 3145728 00:04:06.378 free 0x2000004ffec0 64 00:04:06.378 unregister 0x200000400000 4194304 PASSED 00:04:06.378 free 0x2000009fffc0 4194304 00:04:06.378 unregister 0x200000800000 6291456 PASSED 00:04:06.378 malloc 8388608 00:04:06.378 register 0x200000400000 10485760 00:04:06.378 buf 0x2000005fffc0 len 8388608 PASSED 00:04:06.378 free 0x2000005fffc0 8388608 00:04:06.378 unregister 0x200000400000 10485760 PASSED 00:04:06.378 passed 00:04:06.378 00:04:06.378 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.378 suites 1 1 n/a 0 0 00:04:06.378 tests 1 1 1 0 0 00:04:06.378 asserts 15 15 15 0 n/a 00:04:06.378 00:04:06.378 Elapsed time = 0.080 seconds 00:04:06.378 00:04:06.378 real 0m0.275s 00:04:06.378 user 0m0.105s 00:04:06.378 sys 0m0.068s 00:04:06.378 21:35:25 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.378 21:35:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:06.378 ************************************ 00:04:06.378 END TEST env_mem_callbacks 00:04:06.378 ************************************ 00:04:06.378 00:04:06.378 real 0m10.454s 00:04:06.378 user 0m8.255s 00:04:06.378 sys 0m1.851s 00:04:06.378 21:35:25 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.378 21:35:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.378 ************************************ 00:04:06.378 END TEST env 00:04:06.378 ************************************ 00:04:06.378 21:35:25 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:06.378 21:35:25 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.378 21:35:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.378 21:35:25 -- common/autotest_common.sh@10 -- # set +x 00:04:06.378 ************************************ 00:04:06.378 START TEST rpc 00:04:06.378 ************************************ 00:04:06.378 21:35:25 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:06.639 * Looking for test storage... 00:04:06.639 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.639 21:35:25 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:06.639 21:35:25 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:06.639 21:35:25 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:06.639 21:35:25 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:06.639 21:35:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.639 21:35:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.639 21:35:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.639 21:35:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.639 21:35:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.639 21:35:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.639 21:35:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.639 21:35:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.639 21:35:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.639 21:35:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.639 21:35:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.639 21:35:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:06.639 21:35:25 rpc -- scripts/common.sh@345 -- # : 1 00:04:06.639 21:35:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.639 21:35:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.639 21:35:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:06.639 21:35:25 rpc -- scripts/common.sh@353 -- # local d=1 00:04:06.639 21:35:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.639 21:35:25 rpc -- scripts/common.sh@355 -- # echo 1 00:04:06.639 21:35:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.639 21:35:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:06.639 21:35:25 rpc -- scripts/common.sh@353 -- # local d=2 00:04:06.639 21:35:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.639 21:35:25 rpc -- scripts/common.sh@355 -- # echo 2 00:04:06.639 21:35:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.639 21:35:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.639 21:35:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.639 21:35:25 rpc -- scripts/common.sh@368 -- # return 0 00:04:06.639 21:35:25 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.639 21:35:25 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:06.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.639 --rc genhtml_branch_coverage=1 00:04:06.639 --rc genhtml_function_coverage=1 00:04:06.639 --rc genhtml_legend=1 00:04:06.639 --rc geninfo_all_blocks=1 00:04:06.639 --rc geninfo_unexecuted_blocks=1 00:04:06.639 00:04:06.639 ' 00:04:06.639 21:35:25 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:06.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.639 --rc genhtml_branch_coverage=1 00:04:06.639 --rc genhtml_function_coverage=1 00:04:06.639 --rc genhtml_legend=1 00:04:06.639 --rc geninfo_all_blocks=1 00:04:06.639 --rc geninfo_unexecuted_blocks=1 00:04:06.639 00:04:06.639 ' 00:04:06.640 21:35:25 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:06.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:06.640 --rc genhtml_branch_coverage=1 00:04:06.640 --rc genhtml_function_coverage=1 00:04:06.640 --rc genhtml_legend=1 00:04:06.640 --rc geninfo_all_blocks=1 00:04:06.640 --rc geninfo_unexecuted_blocks=1 00:04:06.640 00:04:06.640 ' 00:04:06.640 21:35:25 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:06.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.640 --rc genhtml_branch_coverage=1 00:04:06.640 --rc genhtml_function_coverage=1 00:04:06.640 --rc genhtml_legend=1 00:04:06.640 --rc geninfo_all_blocks=1 00:04:06.640 --rc geninfo_unexecuted_blocks=1 00:04:06.640 00:04:06.640 ' 00:04:06.640 21:35:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56830 00:04:06.640 21:35:25 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:06.640 21:35:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.640 21:35:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56830 00:04:06.640 21:35:25 rpc -- common/autotest_common.sh@831 -- # '[' -z 56830 ']' 00:04:06.640 21:35:25 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.640 21:35:25 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:06.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.640 21:35:25 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.640 21:35:25 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:06.640 21:35:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.900 [2024-09-29 21:35:25.635045] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:06.900 [2024-09-29 21:35:25.635182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56830 ] 00:04:06.900 [2024-09-29 21:35:25.803356] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.160 [2024-09-29 21:35:26.047492] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:07.160 [2024-09-29 21:35:26.047550] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56830' to capture a snapshot of events at runtime. 00:04:07.160 [2024-09-29 21:35:26.047559] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:07.160 [2024-09-29 21:35:26.047570] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:07.160 [2024-09-29 21:35:26.047577] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56830 for offline analysis/debug. 
00:04:07.160 [2024-09-29 21:35:26.047616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.100 21:35:27 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:08.100 21:35:27 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:08.100 21:35:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.100 21:35:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:08.100 21:35:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:08.100 21:35:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:08.100 21:35:27 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.100 21:35:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.100 21:35:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.100 ************************************ 00:04:08.100 START TEST rpc_integrity 00:04:08.100 ************************************ 00:04:08.100 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:08.100 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.100 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.100 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.100 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.100 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.100 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:08.360 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.360 21:35:27 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.360 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.360 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.360 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.360 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:08.360 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.360 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.360 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.360 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.360 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.360 { 00:04:08.360 "name": "Malloc0", 00:04:08.360 "aliases": [ 00:04:08.360 "6f69a541-7088-4c09-9d1d-84761e321e62" 00:04:08.360 ], 00:04:08.360 "product_name": "Malloc disk", 00:04:08.360 "block_size": 512, 00:04:08.360 "num_blocks": 16384, 00:04:08.360 "uuid": "6f69a541-7088-4c09-9d1d-84761e321e62", 00:04:08.360 "assigned_rate_limits": { 00:04:08.360 "rw_ios_per_sec": 0, 00:04:08.360 "rw_mbytes_per_sec": 0, 00:04:08.360 "r_mbytes_per_sec": 0, 00:04:08.360 "w_mbytes_per_sec": 0 00:04:08.360 }, 00:04:08.360 "claimed": false, 00:04:08.360 "zoned": false, 00:04:08.360 "supported_io_types": { 00:04:08.360 "read": true, 00:04:08.360 "write": true, 00:04:08.360 "unmap": true, 00:04:08.360 "flush": true, 00:04:08.360 "reset": true, 00:04:08.360 "nvme_admin": false, 00:04:08.360 "nvme_io": false, 00:04:08.360 "nvme_io_md": false, 00:04:08.360 "write_zeroes": true, 00:04:08.360 "zcopy": true, 00:04:08.360 "get_zone_info": false, 00:04:08.360 "zone_management": false, 00:04:08.360 "zone_append": false, 00:04:08.360 "compare": false, 00:04:08.360 "compare_and_write": false, 00:04:08.360 "abort": true, 00:04:08.360 "seek_hole": false, 
00:04:08.360 "seek_data": false, 00:04:08.360 "copy": true, 00:04:08.360 "nvme_iov_md": false 00:04:08.360 }, 00:04:08.360 "memory_domains": [ 00:04:08.360 { 00:04:08.360 "dma_device_id": "system", 00:04:08.360 "dma_device_type": 1 00:04:08.360 }, 00:04:08.360 { 00:04:08.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.360 "dma_device_type": 2 00:04:08.360 } 00:04:08.360 ], 00:04:08.360 "driver_specific": {} 00:04:08.360 } 00:04:08.360 ]' 00:04:08.360 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:08.360 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.360 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:08.360 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.360 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.360 [2024-09-29 21:35:27.210731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:08.360 [2024-09-29 21:35:27.210811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.360 [2024-09-29 21:35:27.210836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:08.360 [2024-09-29 21:35:27.210848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.360 [2024-09-29 21:35:27.213382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:08.360 [2024-09-29 21:35:27.213425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.360 Passthru0 00:04:08.360 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.360 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.360 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.360 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:08.360 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.360 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.360 { 00:04:08.360 "name": "Malloc0", 00:04:08.360 "aliases": [ 00:04:08.360 "6f69a541-7088-4c09-9d1d-84761e321e62" 00:04:08.360 ], 00:04:08.360 "product_name": "Malloc disk", 00:04:08.360 "block_size": 512, 00:04:08.360 "num_blocks": 16384, 00:04:08.360 "uuid": "6f69a541-7088-4c09-9d1d-84761e321e62", 00:04:08.360 "assigned_rate_limits": { 00:04:08.360 "rw_ios_per_sec": 0, 00:04:08.360 "rw_mbytes_per_sec": 0, 00:04:08.360 "r_mbytes_per_sec": 0, 00:04:08.360 "w_mbytes_per_sec": 0 00:04:08.360 }, 00:04:08.360 "claimed": true, 00:04:08.360 "claim_type": "exclusive_write", 00:04:08.360 "zoned": false, 00:04:08.360 "supported_io_types": { 00:04:08.360 "read": true, 00:04:08.360 "write": true, 00:04:08.360 "unmap": true, 00:04:08.360 "flush": true, 00:04:08.360 "reset": true, 00:04:08.360 "nvme_admin": false, 00:04:08.360 "nvme_io": false, 00:04:08.360 "nvme_io_md": false, 00:04:08.360 "write_zeroes": true, 00:04:08.360 "zcopy": true, 00:04:08.360 "get_zone_info": false, 00:04:08.360 "zone_management": false, 00:04:08.360 "zone_append": false, 00:04:08.360 "compare": false, 00:04:08.360 "compare_and_write": false, 00:04:08.360 "abort": true, 00:04:08.360 "seek_hole": false, 00:04:08.360 "seek_data": false, 00:04:08.360 "copy": true, 00:04:08.360 "nvme_iov_md": false 00:04:08.360 }, 00:04:08.360 "memory_domains": [ 00:04:08.360 { 00:04:08.360 "dma_device_id": "system", 00:04:08.360 "dma_device_type": 1 00:04:08.360 }, 00:04:08.360 { 00:04:08.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.360 "dma_device_type": 2 00:04:08.360 } 00:04:08.360 ], 00:04:08.360 "driver_specific": {} 00:04:08.360 }, 00:04:08.360 { 00:04:08.360 "name": "Passthru0", 00:04:08.360 "aliases": [ 00:04:08.360 "2f98afc6-3cf5-5fb0-99f8-c4fe06028076" 00:04:08.360 ], 00:04:08.360 "product_name": "passthru", 00:04:08.360 
"block_size": 512, 00:04:08.360 "num_blocks": 16384, 00:04:08.360 "uuid": "2f98afc6-3cf5-5fb0-99f8-c4fe06028076", 00:04:08.360 "assigned_rate_limits": { 00:04:08.360 "rw_ios_per_sec": 0, 00:04:08.360 "rw_mbytes_per_sec": 0, 00:04:08.360 "r_mbytes_per_sec": 0, 00:04:08.360 "w_mbytes_per_sec": 0 00:04:08.360 }, 00:04:08.361 "claimed": false, 00:04:08.361 "zoned": false, 00:04:08.361 "supported_io_types": { 00:04:08.361 "read": true, 00:04:08.361 "write": true, 00:04:08.361 "unmap": true, 00:04:08.361 "flush": true, 00:04:08.361 "reset": true, 00:04:08.361 "nvme_admin": false, 00:04:08.361 "nvme_io": false, 00:04:08.361 "nvme_io_md": false, 00:04:08.361 "write_zeroes": true, 00:04:08.361 "zcopy": true, 00:04:08.361 "get_zone_info": false, 00:04:08.361 "zone_management": false, 00:04:08.361 "zone_append": false, 00:04:08.361 "compare": false, 00:04:08.361 "compare_and_write": false, 00:04:08.361 "abort": true, 00:04:08.361 "seek_hole": false, 00:04:08.361 "seek_data": false, 00:04:08.361 "copy": true, 00:04:08.361 "nvme_iov_md": false 00:04:08.361 }, 00:04:08.361 "memory_domains": [ 00:04:08.361 { 00:04:08.361 "dma_device_id": "system", 00:04:08.361 "dma_device_type": 1 00:04:08.361 }, 00:04:08.361 { 00:04:08.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.361 "dma_device_type": 2 00:04:08.361 } 00:04:08.361 ], 00:04:08.361 "driver_specific": { 00:04:08.361 "passthru": { 00:04:08.361 "name": "Passthru0", 00:04:08.361 "base_bdev_name": "Malloc0" 00:04:08.361 } 00:04:08.361 } 00:04:08.361 } 00:04:08.361 ]' 00:04:08.361 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:08.361 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:08.361 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:08.361 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.361 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.361 21:35:27 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.361 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:08.361 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.361 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.361 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.361 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.361 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.361 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.620 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.620 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.620 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:08.620 21:35:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.620 00:04:08.620 real 0m0.348s 00:04:08.620 user 0m0.182s 00:04:08.620 sys 0m0.058s 00:04:08.620 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.620 21:35:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.620 ************************************ 00:04:08.620 END TEST rpc_integrity 00:04:08.620 ************************************ 00:04:08.620 21:35:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:08.620 21:35:27 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.620 21:35:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.620 21:35:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.620 ************************************ 00:04:08.620 START TEST rpc_plugins 00:04:08.620 ************************************ 00:04:08.620 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:08.620 21:35:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:08.620 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.620 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.620 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.620 21:35:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:08.621 21:35:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:08.621 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.621 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.621 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.621 21:35:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:08.621 { 00:04:08.621 "name": "Malloc1", 00:04:08.621 "aliases": [ 00:04:08.621 "62634b5e-c572-4593-bb67-2b43f00f96d3" 00:04:08.621 ], 00:04:08.621 "product_name": "Malloc disk", 00:04:08.621 "block_size": 4096, 00:04:08.621 "num_blocks": 256, 00:04:08.621 "uuid": "62634b5e-c572-4593-bb67-2b43f00f96d3", 00:04:08.621 "assigned_rate_limits": { 00:04:08.621 "rw_ios_per_sec": 0, 00:04:08.621 "rw_mbytes_per_sec": 0, 00:04:08.621 "r_mbytes_per_sec": 0, 00:04:08.621 "w_mbytes_per_sec": 0 00:04:08.621 }, 00:04:08.621 "claimed": false, 00:04:08.621 "zoned": false, 00:04:08.621 "supported_io_types": { 00:04:08.621 "read": true, 00:04:08.621 "write": true, 00:04:08.621 "unmap": true, 00:04:08.621 "flush": true, 00:04:08.621 "reset": true, 00:04:08.621 "nvme_admin": false, 00:04:08.621 "nvme_io": false, 00:04:08.621 "nvme_io_md": false, 00:04:08.621 "write_zeroes": true, 00:04:08.621 "zcopy": true, 00:04:08.621 "get_zone_info": false, 00:04:08.621 "zone_management": false, 00:04:08.621 "zone_append": false, 00:04:08.621 "compare": false, 00:04:08.621 "compare_and_write": false, 00:04:08.621 "abort": true, 00:04:08.621 "seek_hole": false, 00:04:08.621 "seek_data": false, 00:04:08.621 "copy": 
true, 00:04:08.621 "nvme_iov_md": false 00:04:08.621 }, 00:04:08.621 "memory_domains": [ 00:04:08.621 { 00:04:08.621 "dma_device_id": "system", 00:04:08.621 "dma_device_type": 1 00:04:08.621 }, 00:04:08.621 { 00:04:08.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.621 "dma_device_type": 2 00:04:08.621 } 00:04:08.621 ], 00:04:08.621 "driver_specific": {} 00:04:08.621 } 00:04:08.621 ]' 00:04:08.621 21:35:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:08.621 21:35:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:08.621 21:35:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:08.621 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.621 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.621 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.621 21:35:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:08.621 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.621 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.621 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.621 21:35:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:08.621 21:35:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:08.880 ************************************ 00:04:08.880 END TEST rpc_plugins 00:04:08.880 ************************************ 00:04:08.880 21:35:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:08.880 00:04:08.880 real 0m0.161s 00:04:08.880 user 0m0.091s 00:04:08.880 sys 0m0.029s 00:04:08.880 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.880 21:35:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.880 21:35:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:08.880 21:35:27 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.880 21:35:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.880 21:35:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.880 ************************************ 00:04:08.880 START TEST rpc_trace_cmd_test 00:04:08.881 ************************************ 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:08.881 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56830", 00:04:08.881 "tpoint_group_mask": "0x8", 00:04:08.881 "iscsi_conn": { 00:04:08.881 "mask": "0x2", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "scsi": { 00:04:08.881 "mask": "0x4", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "bdev": { 00:04:08.881 "mask": "0x8", 00:04:08.881 "tpoint_mask": "0xffffffffffffffff" 00:04:08.881 }, 00:04:08.881 "nvmf_rdma": { 00:04:08.881 "mask": "0x10", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "nvmf_tcp": { 00:04:08.881 "mask": "0x20", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "ftl": { 00:04:08.881 "mask": "0x40", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "blobfs": { 00:04:08.881 "mask": "0x80", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "dsa": { 00:04:08.881 "mask": "0x200", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "thread": { 00:04:08.881 "mask": "0x400", 00:04:08.881 
"tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "nvme_pcie": { 00:04:08.881 "mask": "0x800", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "iaa": { 00:04:08.881 "mask": "0x1000", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "nvme_tcp": { 00:04:08.881 "mask": "0x2000", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "bdev_nvme": { 00:04:08.881 "mask": "0x4000", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "sock": { 00:04:08.881 "mask": "0x8000", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "blob": { 00:04:08.881 "mask": "0x10000", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 }, 00:04:08.881 "bdev_raid": { 00:04:08.881 "mask": "0x20000", 00:04:08.881 "tpoint_mask": "0x0" 00:04:08.881 } 00:04:08.881 }' 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:08.881 21:35:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:09.140 21:35:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:09.140 21:35:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:09.140 ************************************ 00:04:09.140 END TEST rpc_trace_cmd_test 00:04:09.140 ************************************ 00:04:09.140 21:35:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:09.140 00:04:09.140 real 0m0.248s 00:04:09.140 user 0m0.190s 00:04:09.140 sys 0m0.045s 00:04:09.140 21:35:27 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.140 21:35:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.140 21:35:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:09.140 21:35:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:09.140 21:35:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:09.140 21:35:27 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.140 21:35:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.140 21:35:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.140 ************************************ 00:04:09.140 START TEST rpc_daemon_integrity 00:04:09.141 ************************************ 00:04:09.141 21:35:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:09.141 21:35:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.141 21:35:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.141 21:35:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.141 21:35:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.141 21:35:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.141 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:09.141 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.141 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.141 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.141 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.141 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.141 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:09.141 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # 
rpc_cmd bdev_get_bdevs 00:04:09.141 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.141 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.141 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.141 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.141 { 00:04:09.141 "name": "Malloc2", 00:04:09.141 "aliases": [ 00:04:09.141 "700f229a-edb6-483d-8eee-7f4b9410ab38" 00:04:09.141 ], 00:04:09.141 "product_name": "Malloc disk", 00:04:09.141 "block_size": 512, 00:04:09.141 "num_blocks": 16384, 00:04:09.141 "uuid": "700f229a-edb6-483d-8eee-7f4b9410ab38", 00:04:09.141 "assigned_rate_limits": { 00:04:09.141 "rw_ios_per_sec": 0, 00:04:09.141 "rw_mbytes_per_sec": 0, 00:04:09.141 "r_mbytes_per_sec": 0, 00:04:09.141 "w_mbytes_per_sec": 0 00:04:09.141 }, 00:04:09.141 "claimed": false, 00:04:09.141 "zoned": false, 00:04:09.141 "supported_io_types": { 00:04:09.141 "read": true, 00:04:09.141 "write": true, 00:04:09.141 "unmap": true, 00:04:09.141 "flush": true, 00:04:09.141 "reset": true, 00:04:09.141 "nvme_admin": false, 00:04:09.141 "nvme_io": false, 00:04:09.141 "nvme_io_md": false, 00:04:09.141 "write_zeroes": true, 00:04:09.141 "zcopy": true, 00:04:09.141 "get_zone_info": false, 00:04:09.141 "zone_management": false, 00:04:09.141 "zone_append": false, 00:04:09.141 "compare": false, 00:04:09.141 "compare_and_write": false, 00:04:09.141 "abort": true, 00:04:09.141 "seek_hole": false, 00:04:09.141 "seek_data": false, 00:04:09.141 "copy": true, 00:04:09.141 "nvme_iov_md": false 00:04:09.141 }, 00:04:09.141 "memory_domains": [ 00:04:09.141 { 00:04:09.141 "dma_device_id": "system", 00:04:09.141 "dma_device_type": 1 00:04:09.141 }, 00:04:09.141 { 00:04:09.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.141 "dma_device_type": 2 00:04:09.141 } 00:04:09.141 ], 00:04:09.141 "driver_specific": {} 00:04:09.141 } 00:04:09.141 ]' 
00:04:09.141 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.401 [2024-09-29 21:35:28.145600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:09.401 [2024-09-29 21:35:28.145662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.401 [2024-09-29 21:35:28.145684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:09.401 [2024-09-29 21:35:28.145696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.401 [2024-09-29 21:35:28.148150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.401 Passthru0 00:04:09.401 [2024-09-29 21:35:28.148240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.401 { 00:04:09.401 "name": "Malloc2", 00:04:09.401 "aliases": [ 00:04:09.401 "700f229a-edb6-483d-8eee-7f4b9410ab38" 00:04:09.401 ], 00:04:09.401 "product_name": "Malloc disk", 00:04:09.401 "block_size": 
512, 00:04:09.401 "num_blocks": 16384, 00:04:09.401 "uuid": "700f229a-edb6-483d-8eee-7f4b9410ab38", 00:04:09.401 "assigned_rate_limits": { 00:04:09.401 "rw_ios_per_sec": 0, 00:04:09.401 "rw_mbytes_per_sec": 0, 00:04:09.401 "r_mbytes_per_sec": 0, 00:04:09.401 "w_mbytes_per_sec": 0 00:04:09.401 }, 00:04:09.401 "claimed": true, 00:04:09.401 "claim_type": "exclusive_write", 00:04:09.401 "zoned": false, 00:04:09.401 "supported_io_types": { 00:04:09.401 "read": true, 00:04:09.401 "write": true, 00:04:09.401 "unmap": true, 00:04:09.401 "flush": true, 00:04:09.401 "reset": true, 00:04:09.401 "nvme_admin": false, 00:04:09.401 "nvme_io": false, 00:04:09.401 "nvme_io_md": false, 00:04:09.401 "write_zeroes": true, 00:04:09.401 "zcopy": true, 00:04:09.401 "get_zone_info": false, 00:04:09.401 "zone_management": false, 00:04:09.401 "zone_append": false, 00:04:09.401 "compare": false, 00:04:09.401 "compare_and_write": false, 00:04:09.401 "abort": true, 00:04:09.401 "seek_hole": false, 00:04:09.401 "seek_data": false, 00:04:09.401 "copy": true, 00:04:09.401 "nvme_iov_md": false 00:04:09.401 }, 00:04:09.401 "memory_domains": [ 00:04:09.401 { 00:04:09.401 "dma_device_id": "system", 00:04:09.401 "dma_device_type": 1 00:04:09.401 }, 00:04:09.401 { 00:04:09.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.401 "dma_device_type": 2 00:04:09.401 } 00:04:09.401 ], 00:04:09.401 "driver_specific": {} 00:04:09.401 }, 00:04:09.401 { 00:04:09.401 "name": "Passthru0", 00:04:09.401 "aliases": [ 00:04:09.401 "cf1cbc04-9b65-59c9-a388-610262d069b4" 00:04:09.401 ], 00:04:09.401 "product_name": "passthru", 00:04:09.401 "block_size": 512, 00:04:09.401 "num_blocks": 16384, 00:04:09.401 "uuid": "cf1cbc04-9b65-59c9-a388-610262d069b4", 00:04:09.401 "assigned_rate_limits": { 00:04:09.401 "rw_ios_per_sec": 0, 00:04:09.401 "rw_mbytes_per_sec": 0, 00:04:09.401 "r_mbytes_per_sec": 0, 00:04:09.401 "w_mbytes_per_sec": 0 00:04:09.401 }, 00:04:09.401 "claimed": false, 00:04:09.401 "zoned": false, 00:04:09.401 
"supported_io_types": { 00:04:09.401 "read": true, 00:04:09.401 "write": true, 00:04:09.401 "unmap": true, 00:04:09.401 "flush": true, 00:04:09.401 "reset": true, 00:04:09.401 "nvme_admin": false, 00:04:09.401 "nvme_io": false, 00:04:09.401 "nvme_io_md": false, 00:04:09.401 "write_zeroes": true, 00:04:09.401 "zcopy": true, 00:04:09.401 "get_zone_info": false, 00:04:09.401 "zone_management": false, 00:04:09.401 "zone_append": false, 00:04:09.401 "compare": false, 00:04:09.401 "compare_and_write": false, 00:04:09.401 "abort": true, 00:04:09.401 "seek_hole": false, 00:04:09.401 "seek_data": false, 00:04:09.401 "copy": true, 00:04:09.401 "nvme_iov_md": false 00:04:09.401 }, 00:04:09.401 "memory_domains": [ 00:04:09.401 { 00:04:09.401 "dma_device_id": "system", 00:04:09.401 "dma_device_type": 1 00:04:09.401 }, 00:04:09.401 { 00:04:09.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.401 "dma_device_type": 2 00:04:09.401 } 00:04:09.401 ], 00:04:09.401 "driver_specific": { 00:04:09.401 "passthru": { 00:04:09.401 "name": "Passthru0", 00:04:09.401 "base_bdev_name": "Malloc2" 00:04:09.401 } 00:04:09.401 } 00:04:09.401 } 00:04:09.401 ]' 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.401 ************************************ 00:04:09.401 END TEST rpc_daemon_integrity 00:04:09.401 ************************************ 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.401 00:04:09.401 real 0m0.368s 00:04:09.401 user 0m0.206s 00:04:09.401 sys 0m0.058s 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.401 21:35:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.661 21:35:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:09.661 21:35:28 rpc -- rpc/rpc.sh@84 -- # killprocess 56830 00:04:09.661 21:35:28 rpc -- common/autotest_common.sh@950 -- # '[' -z 56830 ']' 00:04:09.661 21:35:28 rpc -- common/autotest_common.sh@954 -- # kill -0 56830 00:04:09.661 21:35:28 rpc -- common/autotest_common.sh@955 -- # uname 00:04:09.661 21:35:28 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:09.661 21:35:28 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56830 00:04:09.661 killing process with pid 56830 00:04:09.661 21:35:28 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:09.661 21:35:28 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:09.661 21:35:28 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 56830' 00:04:09.661 21:35:28 rpc -- common/autotest_common.sh@969 -- # kill 56830 00:04:09.661 21:35:28 rpc -- common/autotest_common.sh@974 -- # wait 56830 00:04:12.198 00:04:12.198 real 0m5.813s 00:04:12.198 user 0m6.114s 00:04:12.198 sys 0m1.126s 00:04:12.198 ************************************ 00:04:12.198 END TEST rpc 00:04:12.198 ************************************ 00:04:12.198 21:35:31 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:12.198 21:35:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.198 21:35:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:12.198 21:35:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.198 21:35:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.198 21:35:31 -- common/autotest_common.sh@10 -- # set +x 00:04:12.198 ************************************ 00:04:12.198 START TEST skip_rpc 00:04:12.198 ************************************ 00:04:12.198 21:35:31 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:12.457 * Looking for test storage... 
00:04:12.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:12.457 21:35:31 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:12.457 21:35:31 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:12.457 21:35:31 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:12.457 21:35:31 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:12.457 21:35:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:12.457 21:35:31 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:12.457 21:35:31 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:12.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.457 --rc genhtml_branch_coverage=1 00:04:12.457 --rc genhtml_function_coverage=1 00:04:12.457 --rc genhtml_legend=1 00:04:12.457 --rc geninfo_all_blocks=1 00:04:12.457 --rc geninfo_unexecuted_blocks=1 00:04:12.457 00:04:12.457 ' 00:04:12.457 21:35:31 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:12.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.457 --rc genhtml_branch_coverage=1 00:04:12.457 --rc genhtml_function_coverage=1 00:04:12.457 --rc genhtml_legend=1 00:04:12.457 --rc geninfo_all_blocks=1 00:04:12.457 --rc geninfo_unexecuted_blocks=1 00:04:12.457 00:04:12.457 ' 00:04:12.457 21:35:31 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:04:12.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.457 --rc genhtml_branch_coverage=1 00:04:12.457 --rc genhtml_function_coverage=1 00:04:12.458 --rc genhtml_legend=1 00:04:12.458 --rc geninfo_all_blocks=1 00:04:12.458 --rc geninfo_unexecuted_blocks=1 00:04:12.458 00:04:12.458 ' 00:04:12.458 21:35:31 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:12.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:12.458 --rc genhtml_branch_coverage=1 00:04:12.458 --rc genhtml_function_coverage=1 00:04:12.458 --rc genhtml_legend=1 00:04:12.458 --rc geninfo_all_blocks=1 00:04:12.458 --rc geninfo_unexecuted_blocks=1 00:04:12.458 00:04:12.458 ' 00:04:12.458 21:35:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:12.458 21:35:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:12.458 21:35:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:12.458 21:35:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:12.458 21:35:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:12.458 21:35:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.458 ************************************ 00:04:12.458 START TEST skip_rpc 00:04:12.458 ************************************ 00:04:12.458 21:35:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:12.458 21:35:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57070 00:04:12.458 21:35:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:12.458 21:35:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.458 21:35:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:12.717 [2024-09-29 21:35:31.501224] Starting SPDK v25.01-pre 
git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:12.717 [2024-09-29 21:35:31.501412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57070 ] 00:04:12.717 [2024-09-29 21:35:31.663973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.976 [2024-09-29 21:35:31.913458] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57070 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57070 ']' 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57070 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:18.252 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57070 00:04:18.252 killing process with pid 57070 00:04:18.253 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:18.253 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:18.253 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57070' 00:04:18.253 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57070 00:04:18.253 21:35:36 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57070 00:04:20.163 00:04:20.163 real 0m7.736s 00:04:20.163 user 0m7.090s 00:04:20.163 sys 0m0.565s 00:04:20.163 ************************************ 00:04:20.163 END TEST skip_rpc 00:04:20.163 ************************************ 00:04:20.163 21:35:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.163 21:35:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.422 21:35:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:20.422 21:35:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.422 21:35:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.422 21:35:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.422 
************************************ 00:04:20.422 START TEST skip_rpc_with_json 00:04:20.422 ************************************ 00:04:20.422 21:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:20.422 21:35:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:20.422 21:35:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57174 00:04:20.422 21:35:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.422 21:35:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.422 21:35:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57174 00:04:20.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.422 21:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57174 ']' 00:04:20.422 21:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.422 21:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.422 21:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.422 21:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.422 21:35:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.422 [2024-09-29 21:35:39.317368] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:20.422 [2024-09-29 21:35:39.317588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57174 ] 00:04:20.682 [2024-09-29 21:35:39.482591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.941 [2024-09-29 21:35:39.724777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.882 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:21.882 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:21.882 21:35:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:21.882 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.882 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.882 [2024-09-29 21:35:40.711083] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:21.882 request: 00:04:21.882 { 00:04:21.882 "trtype": "tcp", 00:04:21.882 "method": "nvmf_get_transports", 00:04:21.882 "req_id": 1 00:04:21.882 } 00:04:21.882 Got JSON-RPC error response 00:04:21.882 response: 00:04:21.882 { 00:04:21.882 "code": -19, 00:04:21.882 "message": "No such device" 00:04:21.882 } 00:04:21.882 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:21.882 21:35:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:21.882 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.882 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.882 [2024-09-29 21:35:40.727187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:21.882 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:21.882 21:35:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:21.882 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:21.882 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.142 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:22.142 21:35:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:22.142 { 00:04:22.142 "subsystems": [ 00:04:22.142 { 00:04:22.142 "subsystem": "fsdev", 00:04:22.142 "config": [ 00:04:22.142 { 00:04:22.142 "method": "fsdev_set_opts", 00:04:22.142 "params": { 00:04:22.142 "fsdev_io_pool_size": 65535, 00:04:22.142 "fsdev_io_cache_size": 256 00:04:22.142 } 00:04:22.142 } 00:04:22.142 ] 00:04:22.142 }, 00:04:22.142 { 00:04:22.142 "subsystem": "keyring", 00:04:22.142 "config": [] 00:04:22.142 }, 00:04:22.142 { 00:04:22.142 "subsystem": "iobuf", 00:04:22.142 "config": [ 00:04:22.142 { 00:04:22.142 "method": "iobuf_set_options", 00:04:22.142 "params": { 00:04:22.142 "small_pool_count": 8192, 00:04:22.142 "large_pool_count": 1024, 00:04:22.142 "small_bufsize": 8192, 00:04:22.142 "large_bufsize": 135168 00:04:22.142 } 00:04:22.142 } 00:04:22.142 ] 00:04:22.142 }, 00:04:22.142 { 00:04:22.142 "subsystem": "sock", 00:04:22.142 "config": [ 00:04:22.142 { 00:04:22.142 "method": "sock_set_default_impl", 00:04:22.142 "params": { 00:04:22.142 "impl_name": "posix" 00:04:22.142 } 00:04:22.142 }, 00:04:22.142 { 00:04:22.142 "method": "sock_impl_set_options", 00:04:22.142 "params": { 00:04:22.142 "impl_name": "ssl", 00:04:22.142 "recv_buf_size": 4096, 00:04:22.142 "send_buf_size": 4096, 00:04:22.143 "enable_recv_pipe": true, 00:04:22.143 "enable_quickack": false, 00:04:22.143 "enable_placement_id": 0, 00:04:22.143 
"enable_zerocopy_send_server": true, 00:04:22.143 "enable_zerocopy_send_client": false, 00:04:22.143 "zerocopy_threshold": 0, 00:04:22.143 "tls_version": 0, 00:04:22.143 "enable_ktls": false 00:04:22.143 } 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "method": "sock_impl_set_options", 00:04:22.143 "params": { 00:04:22.143 "impl_name": "posix", 00:04:22.143 "recv_buf_size": 2097152, 00:04:22.143 "send_buf_size": 2097152, 00:04:22.143 "enable_recv_pipe": true, 00:04:22.143 "enable_quickack": false, 00:04:22.143 "enable_placement_id": 0, 00:04:22.143 "enable_zerocopy_send_server": true, 00:04:22.143 "enable_zerocopy_send_client": false, 00:04:22.143 "zerocopy_threshold": 0, 00:04:22.143 "tls_version": 0, 00:04:22.143 "enable_ktls": false 00:04:22.143 } 00:04:22.143 } 00:04:22.143 ] 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "subsystem": "vmd", 00:04:22.143 "config": [] 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "subsystem": "accel", 00:04:22.143 "config": [ 00:04:22.143 { 00:04:22.143 "method": "accel_set_options", 00:04:22.143 "params": { 00:04:22.143 "small_cache_size": 128, 00:04:22.143 "large_cache_size": 16, 00:04:22.143 "task_count": 2048, 00:04:22.143 "sequence_count": 2048, 00:04:22.143 "buf_count": 2048 00:04:22.143 } 00:04:22.143 } 00:04:22.143 ] 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "subsystem": "bdev", 00:04:22.143 "config": [ 00:04:22.143 { 00:04:22.143 "method": "bdev_set_options", 00:04:22.143 "params": { 00:04:22.143 "bdev_io_pool_size": 65535, 00:04:22.143 "bdev_io_cache_size": 256, 00:04:22.143 "bdev_auto_examine": true, 00:04:22.143 "iobuf_small_cache_size": 128, 00:04:22.143 "iobuf_large_cache_size": 16 00:04:22.143 } 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "method": "bdev_raid_set_options", 00:04:22.143 "params": { 00:04:22.143 "process_window_size_kb": 1024, 00:04:22.143 "process_max_bandwidth_mb_sec": 0 00:04:22.143 } 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "method": "bdev_iscsi_set_options", 00:04:22.143 "params": { 00:04:22.143 
"timeout_sec": 30 00:04:22.143 } 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "method": "bdev_nvme_set_options", 00:04:22.143 "params": { 00:04:22.143 "action_on_timeout": "none", 00:04:22.143 "timeout_us": 0, 00:04:22.143 "timeout_admin_us": 0, 00:04:22.143 "keep_alive_timeout_ms": 10000, 00:04:22.143 "arbitration_burst": 0, 00:04:22.143 "low_priority_weight": 0, 00:04:22.143 "medium_priority_weight": 0, 00:04:22.143 "high_priority_weight": 0, 00:04:22.143 "nvme_adminq_poll_period_us": 10000, 00:04:22.143 "nvme_ioq_poll_period_us": 0, 00:04:22.143 "io_queue_requests": 0, 00:04:22.143 "delay_cmd_submit": true, 00:04:22.143 "transport_retry_count": 4, 00:04:22.143 "bdev_retry_count": 3, 00:04:22.143 "transport_ack_timeout": 0, 00:04:22.143 "ctrlr_loss_timeout_sec": 0, 00:04:22.143 "reconnect_delay_sec": 0, 00:04:22.143 "fast_io_fail_timeout_sec": 0, 00:04:22.143 "disable_auto_failback": false, 00:04:22.143 "generate_uuids": false, 00:04:22.143 "transport_tos": 0, 00:04:22.143 "nvme_error_stat": false, 00:04:22.143 "rdma_srq_size": 0, 00:04:22.143 "io_path_stat": false, 00:04:22.143 "allow_accel_sequence": false, 00:04:22.143 "rdma_max_cq_size": 0, 00:04:22.143 "rdma_cm_event_timeout_ms": 0, 00:04:22.143 "dhchap_digests": [ 00:04:22.143 "sha256", 00:04:22.143 "sha384", 00:04:22.143 "sha512" 00:04:22.143 ], 00:04:22.143 "dhchap_dhgroups": [ 00:04:22.143 "null", 00:04:22.143 "ffdhe2048", 00:04:22.143 "ffdhe3072", 00:04:22.143 "ffdhe4096", 00:04:22.143 "ffdhe6144", 00:04:22.143 "ffdhe8192" 00:04:22.143 ] 00:04:22.143 } 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "method": "bdev_nvme_set_hotplug", 00:04:22.143 "params": { 00:04:22.143 "period_us": 100000, 00:04:22.143 "enable": false 00:04:22.143 } 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "method": "bdev_wait_for_examine" 00:04:22.143 } 00:04:22.143 ] 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "subsystem": "scsi", 00:04:22.143 "config": null 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "subsystem": "scheduler", 
00:04:22.143 "config": [ 00:04:22.143 { 00:04:22.143 "method": "framework_set_scheduler", 00:04:22.143 "params": { 00:04:22.143 "name": "static" 00:04:22.143 } 00:04:22.143 } 00:04:22.143 ] 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "subsystem": "vhost_scsi", 00:04:22.143 "config": [] 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "subsystem": "vhost_blk", 00:04:22.143 "config": [] 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "subsystem": "ublk", 00:04:22.143 "config": [] 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "subsystem": "nbd", 00:04:22.143 "config": [] 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "subsystem": "nvmf", 00:04:22.143 "config": [ 00:04:22.143 { 00:04:22.143 "method": "nvmf_set_config", 00:04:22.143 "params": { 00:04:22.143 "discovery_filter": "match_any", 00:04:22.143 "admin_cmd_passthru": { 00:04:22.143 "identify_ctrlr": false 00:04:22.143 }, 00:04:22.143 "dhchap_digests": [ 00:04:22.143 "sha256", 00:04:22.143 "sha384", 00:04:22.143 "sha512" 00:04:22.143 ], 00:04:22.143 "dhchap_dhgroups": [ 00:04:22.143 "null", 00:04:22.143 "ffdhe2048", 00:04:22.143 "ffdhe3072", 00:04:22.143 "ffdhe4096", 00:04:22.143 "ffdhe6144", 00:04:22.143 "ffdhe8192" 00:04:22.143 ] 00:04:22.143 } 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "method": "nvmf_set_max_subsystems", 00:04:22.143 "params": { 00:04:22.143 "max_subsystems": 1024 00:04:22.143 } 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "method": "nvmf_set_crdt", 00:04:22.143 "params": { 00:04:22.143 "crdt1": 0, 00:04:22.143 "crdt2": 0, 00:04:22.143 "crdt3": 0 00:04:22.143 } 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "method": "nvmf_create_transport", 00:04:22.143 "params": { 00:04:22.143 "trtype": "TCP", 00:04:22.143 "max_queue_depth": 128, 00:04:22.143 "max_io_qpairs_per_ctrlr": 127, 00:04:22.143 "in_capsule_data_size": 4096, 00:04:22.143 "max_io_size": 131072, 00:04:22.143 "io_unit_size": 131072, 00:04:22.143 "max_aq_depth": 128, 00:04:22.143 "num_shared_buffers": 511, 00:04:22.143 "buf_cache_size": 4294967295, 
00:04:22.143 "dif_insert_or_strip": false, 00:04:22.143 "zcopy": false, 00:04:22.143 "c2h_success": true, 00:04:22.143 "sock_priority": 0, 00:04:22.143 "abort_timeout_sec": 1, 00:04:22.143 "ack_timeout": 0, 00:04:22.143 "data_wr_pool_size": 0 00:04:22.143 } 00:04:22.143 } 00:04:22.143 ] 00:04:22.143 }, 00:04:22.143 { 00:04:22.143 "subsystem": "iscsi", 00:04:22.143 "config": [ 00:04:22.143 { 00:04:22.143 "method": "iscsi_set_options", 00:04:22.143 "params": { 00:04:22.143 "node_base": "iqn.2016-06.io.spdk", 00:04:22.143 "max_sessions": 128, 00:04:22.143 "max_connections_per_session": 2, 00:04:22.143 "max_queue_depth": 64, 00:04:22.143 "default_time2wait": 2, 00:04:22.143 "default_time2retain": 20, 00:04:22.143 "first_burst_length": 8192, 00:04:22.143 "immediate_data": true, 00:04:22.143 "allow_duplicated_isid": false, 00:04:22.143 "error_recovery_level": 0, 00:04:22.143 "nop_timeout": 60, 00:04:22.143 "nop_in_interval": 30, 00:04:22.143 "disable_chap": false, 00:04:22.143 "require_chap": false, 00:04:22.143 "mutual_chap": false, 00:04:22.143 "chap_group": 0, 00:04:22.143 "max_large_datain_per_connection": 64, 00:04:22.143 "max_r2t_per_connection": 4, 00:04:22.143 "pdu_pool_size": 36864, 00:04:22.143 "immediate_data_pool_size": 16384, 00:04:22.143 "data_out_pool_size": 2048 00:04:22.143 } 00:04:22.143 } 00:04:22.143 ] 00:04:22.143 } 00:04:22.143 ] 00:04:22.143 } 00:04:22.143 21:35:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:22.143 21:35:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57174 00:04:22.143 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57174 ']' 00:04:22.143 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57174 00:04:22.143 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:22.143 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:04:22.143 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57174 00:04:22.143 killing process with pid 57174 00:04:22.143 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:22.143 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:22.143 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57174' 00:04:22.143 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57174 00:04:22.143 21:35:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57174 00:04:24.729 21:35:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:24.729 21:35:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57240 00:04:24.729 21:35:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:30.008 21:35:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57240 00:04:30.008 21:35:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57240 ']' 00:04:30.008 21:35:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57240 00:04:30.008 21:35:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:30.008 21:35:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:30.008 21:35:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57240 00:04:30.008 killing process with pid 57240 00:04:30.008 21:35:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:30.008 21:35:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:04:30.008 21:35:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57240' 00:04:30.008 21:35:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57240 00:04:30.008 21:35:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57240 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:32.545 ************************************ 00:04:32.545 END TEST skip_rpc_with_json 00:04:32.545 ************************************ 00:04:32.545 00:04:32.545 real 0m12.092s 00:04:32.545 user 0m11.152s 00:04:32.545 sys 0m1.207s 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.545 21:35:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:32.545 21:35:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.545 21:35:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.545 21:35:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.545 ************************************ 00:04:32.545 START TEST skip_rpc_with_delay 00:04:32.545 ************************************ 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:32.545 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:32.545 [2024-09-29 21:35:51.501902] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:32.545 [2024-09-29 21:35:51.502602] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:32.805 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:32.805 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:32.805 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:32.805 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:32.805 00:04:32.805 real 0m0.185s 00:04:32.805 user 0m0.106s 00:04:32.805 sys 0m0.076s 00:04:32.805 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.805 ************************************ 00:04:32.805 END TEST skip_rpc_with_delay 00:04:32.805 ************************************ 00:04:32.805 21:35:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:32.805 21:35:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:32.805 21:35:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:32.805 21:35:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:32.805 21:35:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.805 21:35:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.805 21:35:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.805 ************************************ 00:04:32.805 START TEST exit_on_failed_rpc_init 00:04:32.805 ************************************ 00:04:32.805 21:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:32.805 21:35:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57369 00:04:32.805 21:35:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:04:32.805 21:35:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57369 00:04:32.805 21:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57369 ']' 00:04:32.805 21:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.805 21:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:32.805 21:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.805 21:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:32.805 21:35:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.805 [2024-09-29 21:35:51.756556] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:32.805 [2024-09-29 21:35:51.756779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57369 ] 00:04:33.065 [2024-09-29 21:35:51.923431] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.326 [2024-09-29 21:35:52.178561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.265 21:35:53 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:34.265 21:35:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.525 [2024-09-29 21:35:53.286101] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:34.525 [2024-09-29 21:35:53.286327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57398 ] 00:04:34.525 [2024-09-29 21:35:53.449612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.784 [2024-09-29 21:35:53.657299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.784 [2024-09-29 21:35:53.657520] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:34.784 [2024-09-29 21:35:53.657588] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:34.784 [2024-09-29 21:35:53.657623] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57369 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57369 ']' 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57369 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57369 00:04:35.353 killing process with pid 57369 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 57369' 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57369 00:04:35.353 21:35:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57369 00:04:37.893 00:04:37.893 real 0m5.093s 00:04:37.893 user 0m5.454s 00:04:37.893 sys 0m0.752s 00:04:37.893 ************************************ 00:04:37.893 END TEST exit_on_failed_rpc_init 00:04:37.893 ************************************ 00:04:37.893 21:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.893 21:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.893 21:35:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.893 00:04:37.893 real 0m25.620s 00:04:37.893 user 0m24.018s 00:04:37.893 sys 0m2.909s 00:04:37.893 21:35:56 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.893 ************************************ 00:04:37.893 END TEST skip_rpc 00:04:37.893 ************************************ 00:04:37.893 21:35:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.893 21:35:56 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:37.893 21:35:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.893 21:35:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.893 21:35:56 -- common/autotest_common.sh@10 -- # set +x 00:04:37.893 ************************************ 00:04:37.893 START TEST rpc_client 00:04:37.893 ************************************ 00:04:37.893 21:35:56 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:38.153 * Looking for test storage... 
00:04:38.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:38.153 21:35:56 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:38.153 21:35:56 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:04:38.153 21:35:56 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:38.153 21:35:57 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.153 21:35:57 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:38.153 21:35:57 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.153 21:35:57 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.153 --rc genhtml_branch_coverage=1 00:04:38.153 --rc genhtml_function_coverage=1 00:04:38.153 --rc genhtml_legend=1 00:04:38.153 --rc geninfo_all_blocks=1 00:04:38.153 --rc geninfo_unexecuted_blocks=1 00:04:38.153 00:04:38.153 ' 00:04:38.153 21:35:57 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.153 --rc genhtml_branch_coverage=1 00:04:38.153 --rc genhtml_function_coverage=1 00:04:38.153 --rc genhtml_legend=1 00:04:38.153 --rc geninfo_all_blocks=1 00:04:38.153 --rc geninfo_unexecuted_blocks=1 00:04:38.153 00:04:38.153 ' 00:04:38.153 21:35:57 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.153 --rc genhtml_branch_coverage=1 00:04:38.153 --rc genhtml_function_coverage=1 00:04:38.153 --rc genhtml_legend=1 00:04:38.153 --rc geninfo_all_blocks=1 00:04:38.153 --rc geninfo_unexecuted_blocks=1 00:04:38.153 00:04:38.153 ' 00:04:38.153 21:35:57 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.153 --rc genhtml_branch_coverage=1 00:04:38.153 --rc genhtml_function_coverage=1 00:04:38.153 --rc genhtml_legend=1 00:04:38.154 --rc geninfo_all_blocks=1 00:04:38.154 --rc geninfo_unexecuted_blocks=1 00:04:38.154 00:04:38.154 ' 00:04:38.154 21:35:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:38.154 OK 00:04:38.154 21:35:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:38.154 00:04:38.154 real 0m0.275s 00:04:38.154 user 0m0.150s 00:04:38.154 sys 0m0.143s 00:04:38.154 21:35:57 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.154 21:35:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:38.154 ************************************ 00:04:38.154 END TEST rpc_client 00:04:38.154 ************************************ 00:04:38.414 21:35:57 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:38.414 21:35:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.414 21:35:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.414 21:35:57 -- common/autotest_common.sh@10 -- # set +x 00:04:38.414 ************************************ 00:04:38.414 START TEST json_config 00:04:38.414 ************************************ 00:04:38.414 21:35:57 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:38.414 21:35:57 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:38.414 21:35:57 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:04:38.414 21:35:57 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:38.414 21:35:57 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:38.414 21:35:57 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.414 21:35:57 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.414 21:35:57 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.414 21:35:57 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.414 21:35:57 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.414 21:35:57 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.414 21:35:57 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.414 21:35:57 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.414 21:35:57 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.414 21:35:57 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.414 21:35:57 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.414 21:35:57 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:38.414 21:35:57 json_config -- scripts/common.sh@345 -- # : 1 00:04:38.414 21:35:57 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.414 21:35:57 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.414 21:35:57 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:38.414 21:35:57 json_config -- scripts/common.sh@353 -- # local d=1 00:04:38.414 21:35:57 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.414 21:35:57 json_config -- scripts/common.sh@355 -- # echo 1 00:04:38.414 21:35:57 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.414 21:35:57 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:38.414 21:35:57 json_config -- scripts/common.sh@353 -- # local d=2 00:04:38.414 21:35:57 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.414 21:35:57 json_config -- scripts/common.sh@355 -- # echo 2 00:04:38.414 21:35:57 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.414 21:35:57 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.414 21:35:57 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.414 21:35:57 json_config -- scripts/common.sh@368 -- # return 0 00:04:38.414 21:35:57 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.414 21:35:57 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:38.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.414 --rc genhtml_branch_coverage=1 00:04:38.414 --rc genhtml_function_coverage=1 00:04:38.414 --rc genhtml_legend=1 00:04:38.414 --rc geninfo_all_blocks=1 00:04:38.414 --rc geninfo_unexecuted_blocks=1 00:04:38.414 00:04:38.414 ' 00:04:38.414 21:35:57 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:38.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.414 --rc genhtml_branch_coverage=1 00:04:38.414 --rc genhtml_function_coverage=1 00:04:38.414 --rc genhtml_legend=1 00:04:38.414 --rc geninfo_all_blocks=1 00:04:38.414 --rc geninfo_unexecuted_blocks=1 00:04:38.414 00:04:38.414 ' 00:04:38.414 21:35:57 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:38.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.414 --rc genhtml_branch_coverage=1 00:04:38.414 --rc genhtml_function_coverage=1 00:04:38.414 --rc genhtml_legend=1 00:04:38.414 --rc geninfo_all_blocks=1 00:04:38.414 --rc geninfo_unexecuted_blocks=1 00:04:38.414 00:04:38.414 ' 00:04:38.414 21:35:57 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:38.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.414 --rc genhtml_branch_coverage=1 00:04:38.414 --rc genhtml_function_coverage=1 00:04:38.414 --rc genhtml_legend=1 00:04:38.414 --rc geninfo_all_blocks=1 00:04:38.414 --rc geninfo_unexecuted_blocks=1 00:04:38.414 00:04:38.414 ' 00:04:38.414 21:35:57 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:38.414 21:35:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:38.414 21:35:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.414 21:35:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.414 21:35:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.414 21:35:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.414 21:35:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.414 21:35:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.414 21:35:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.414 21:35:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.414 21:35:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.414 21:35:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.674 21:35:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5370061d-ca0e-42cc-a5d6-16f235e3b196 00:04:38.674 21:35:57 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5370061d-ca0e-42cc-a5d6-16f235e3b196 00:04:38.674 21:35:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.674 21:35:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.674 21:35:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:38.674 21:35:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.674 21:35:57 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:38.674 21:35:57 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:38.674 21:35:57 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.674 21:35:57 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.674 21:35:57 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.674 21:35:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.674 21:35:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.674 21:35:57 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.674 21:35:57 json_config -- paths/export.sh@5 -- # export PATH 00:04:38.674 21:35:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.675 21:35:57 json_config -- nvmf/common.sh@51 -- # : 0 00:04:38.675 21:35:57 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:38.675 21:35:57 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:38.675 21:35:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.675 21:35:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.675 21:35:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:38.675 21:35:57 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:38.675 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:38.675 21:35:57 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:38.675 21:35:57 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:38.675 21:35:57 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:38.675 21:35:57 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:38.675 21:35:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:38.675 21:35:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:38.675 WARNING: No tests are enabled so not running JSON configuration tests 00:04:38.675 21:35:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:38.675 21:35:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:38.675 21:35:57 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:38.675 21:35:57 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:38.675 ************************************ 00:04:38.675 END TEST json_config 00:04:38.675 ************************************ 00:04:38.675 00:04:38.675 real 0m0.226s 00:04:38.675 user 0m0.143s 00:04:38.675 sys 0m0.088s 00:04:38.675 21:35:57 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.675 21:35:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.675 21:35:57 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:38.675 21:35:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.675 21:35:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.675 21:35:57 -- common/autotest_common.sh@10 -- # set +x 00:04:38.675 ************************************ 00:04:38.675 START TEST json_config_extra_key 00:04:38.675 ************************************ 00:04:38.675 21:35:57 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:38.675 21:35:57 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:38.675 21:35:57 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:04:38.675 21:35:57 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:38.675 21:35:57 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.675 21:35:57 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.935 21:35:57 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:38.935 21:35:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:38.935 21:35:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.935 21:35:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:38.935 21:35:57 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.935 21:35:57 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:38.935 21:35:57 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:38.935 21:35:57 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.935 21:35:57 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:38.935 21:35:57 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.935 21:35:57 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.935 21:35:57 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.935 21:35:57 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:38.935 21:35:57 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.935 21:35:57 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:38.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.935 --rc genhtml_branch_coverage=1 00:04:38.935 --rc genhtml_function_coverage=1 00:04:38.935 --rc genhtml_legend=1 00:04:38.935 --rc geninfo_all_blocks=1 00:04:38.935 --rc geninfo_unexecuted_blocks=1 00:04:38.935 00:04:38.935 ' 00:04:38.935 21:35:57 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:38.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.935 --rc genhtml_branch_coverage=1 00:04:38.935 --rc genhtml_function_coverage=1 00:04:38.935 --rc 
genhtml_legend=1 00:04:38.935 --rc geninfo_all_blocks=1 00:04:38.935 --rc geninfo_unexecuted_blocks=1 00:04:38.935 00:04:38.935 ' 00:04:38.935 21:35:57 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:38.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.935 --rc genhtml_branch_coverage=1 00:04:38.935 --rc genhtml_function_coverage=1 00:04:38.935 --rc genhtml_legend=1 00:04:38.935 --rc geninfo_all_blocks=1 00:04:38.935 --rc geninfo_unexecuted_blocks=1 00:04:38.935 00:04:38.935 ' 00:04:38.935 21:35:57 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:38.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.935 --rc genhtml_branch_coverage=1 00:04:38.935 --rc genhtml_function_coverage=1 00:04:38.935 --rc genhtml_legend=1 00:04:38.935 --rc geninfo_all_blocks=1 00:04:38.935 --rc geninfo_unexecuted_blocks=1 00:04:38.935 00:04:38.935 ' 00:04:38.935 21:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:38.935 21:35:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5370061d-ca0e-42cc-a5d6-16f235e3b196 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5370061d-ca0e-42cc-a5d6-16f235e3b196 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:38.936 21:35:57 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:38.936 21:35:57 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:38.936 21:35:57 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:38.936 21:35:57 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:38.936 21:35:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.936 21:35:57 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.936 21:35:57 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.936 21:35:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:38.936 21:35:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:38.936 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:38.936 21:35:57 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:38.936 21:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:38.936 21:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:38.936 21:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:38.936 21:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:38.936 21:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:38.936 21:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:38.936 21:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:38.936 21:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:38.936 21:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:38.936 21:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:38.936 21:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:38.936 INFO: launching applications... 
00:04:38.936 21:35:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:38.936 21:35:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:38.936 21:35:57 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:38.936 21:35:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:38.936 21:35:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:38.936 21:35:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:38.936 21:35:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.936 21:35:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.936 21:35:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57608 00:04:38.936 21:35:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:38.936 Waiting for target to run... 00:04:38.936 21:35:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57608 /var/tmp/spdk_tgt.sock 00:04:38.936 21:35:57 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:38.936 21:35:57 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57608 ']' 00:04:38.936 21:35:57 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:38.936 21:35:57 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:38.936 21:35:57 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:38.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:38.936 21:35:57 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:38.936 21:35:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:38.936 [2024-09-29 21:35:57.825474] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:38.936 [2024-09-29 21:35:57.825701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57608 ] 00:04:39.504 [2024-09-29 21:35:58.383889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.763 [2024-09-29 21:35:58.599899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.332 00:04:40.332 INFO: shutting down applications... 00:04:40.332 21:35:59 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.332 21:35:59 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:40.332 21:35:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:40.332 21:35:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:40.332 21:35:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:40.332 21:35:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:40.332 21:35:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:40.332 21:35:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57608 ]] 00:04:40.332 21:35:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57608 00:04:40.332 21:35:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:40.332 21:35:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.332 21:35:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57608 00:04:40.332 21:35:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.908 21:35:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.908 21:35:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.908 21:35:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57608 00:04:40.908 21:35:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:41.492 21:36:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:41.492 21:36:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.492 21:36:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57608 00:04:41.492 21:36:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.062 21:36:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.062 21:36:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.062 21:36:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57608 00:04:42.062 21:36:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.633 21:36:01 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:42.633 21:36:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.633 21:36:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57608 00:04:42.633 21:36:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.891 21:36:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.891 21:36:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.891 21:36:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57608 00:04:42.891 21:36:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.461 21:36:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.461 21:36:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.461 SPDK target shutdown done 00:04:43.461 Success 00:04:43.461 21:36:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57608 00:04:43.461 21:36:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:43.461 21:36:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:43.461 21:36:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:43.461 21:36:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:43.461 21:36:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:43.461 00:04:43.461 real 0m4.862s 00:04:43.461 user 0m4.467s 00:04:43.461 sys 0m0.790s 00:04:43.461 21:36:02 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.461 21:36:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:43.461 ************************************ 00:04:43.461 END TEST json_config_extra_key 00:04:43.461 ************************************ 00:04:43.461 21:36:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.461 21:36:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.461 21:36:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.461 21:36:02 -- common/autotest_common.sh@10 -- # set +x 00:04:43.461 ************************************ 00:04:43.461 START TEST alias_rpc 00:04:43.461 ************************************ 00:04:43.461 21:36:02 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.722 * Looking for test storage... 00:04:43.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:43.722 21:36:02 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.722 21:36:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:43.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.722 --rc genhtml_branch_coverage=1 00:04:43.722 --rc genhtml_function_coverage=1 00:04:43.722 --rc genhtml_legend=1 00:04:43.722 --rc geninfo_all_blocks=1 00:04:43.722 --rc geninfo_unexecuted_blocks=1 00:04:43.722 00:04:43.722 ' 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:43.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.722 --rc genhtml_branch_coverage=1 00:04:43.722 --rc genhtml_function_coverage=1 00:04:43.722 --rc 
genhtml_legend=1 00:04:43.722 --rc geninfo_all_blocks=1 00:04:43.722 --rc geninfo_unexecuted_blocks=1 00:04:43.722 00:04:43.722 ' 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:43.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.722 --rc genhtml_branch_coverage=1 00:04:43.722 --rc genhtml_function_coverage=1 00:04:43.722 --rc genhtml_legend=1 00:04:43.722 --rc geninfo_all_blocks=1 00:04:43.722 --rc geninfo_unexecuted_blocks=1 00:04:43.722 00:04:43.722 ' 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:43.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.722 --rc genhtml_branch_coverage=1 00:04:43.722 --rc genhtml_function_coverage=1 00:04:43.722 --rc genhtml_legend=1 00:04:43.722 --rc geninfo_all_blocks=1 00:04:43.722 --rc geninfo_unexecuted_blocks=1 00:04:43.722 00:04:43.722 ' 00:04:43.722 21:36:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:43.722 21:36:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.722 21:36:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57727 00:04:43.722 21:36:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57727 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57727 ']' 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.722 21:36:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.982 [2024-09-29 21:36:02.742357] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:43.982 [2024-09-29 21:36:02.742605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57727 ] 00:04:43.982 [2024-09-29 21:36:02.909601] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.241 [2024-09-29 21:36:03.147466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.179 21:36:04 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:45.179 21:36:04 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:45.179 21:36:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:45.439 21:36:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57727 00:04:45.439 21:36:04 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57727 ']' 00:04:45.439 21:36:04 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57727 00:04:45.439 21:36:04 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:45.439 21:36:04 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:45.439 21:36:04 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57727 00:04:45.439 21:36:04 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:45.439 21:36:04 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:45.439 21:36:04 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57727' 00:04:45.439 killing process with pid 57727 00:04:45.439 21:36:04 alias_rpc -- 
common/autotest_common.sh@969 -- # kill 57727 00:04:45.439 21:36:04 alias_rpc -- common/autotest_common.sh@974 -- # wait 57727 00:04:48.732 ************************************ 00:04:48.732 END TEST alias_rpc 00:04:48.732 ************************************ 00:04:48.732 00:04:48.732 real 0m4.653s 00:04:48.732 user 0m4.433s 00:04:48.732 sys 0m0.763s 00:04:48.732 21:36:07 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.732 21:36:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.732 21:36:07 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:48.732 21:36:07 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:48.732 21:36:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:48.732 21:36:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.732 21:36:07 -- common/autotest_common.sh@10 -- # set +x 00:04:48.732 ************************************ 00:04:48.732 START TEST spdkcli_tcp 00:04:48.732 ************************************ 00:04:48.732 21:36:07 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:48.732 * Looking for test storage... 
00:04:48.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:48.732 21:36:07 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:48.732 21:36:07 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:04:48.732 21:36:07 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:48.732 21:36:07 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.732 21:36:07 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:48.732 21:36:07 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.732 21:36:07 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:48.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.732 --rc genhtml_branch_coverage=1 00:04:48.732 --rc genhtml_function_coverage=1 00:04:48.732 --rc genhtml_legend=1 00:04:48.732 --rc geninfo_all_blocks=1 00:04:48.733 --rc geninfo_unexecuted_blocks=1 00:04:48.733 00:04:48.733 ' 00:04:48.733 21:36:07 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:48.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.733 --rc genhtml_branch_coverage=1 00:04:48.733 --rc genhtml_function_coverage=1 00:04:48.733 --rc genhtml_legend=1 00:04:48.733 --rc geninfo_all_blocks=1 00:04:48.733 --rc geninfo_unexecuted_blocks=1 00:04:48.733 00:04:48.733 ' 00:04:48.733 21:36:07 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:48.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.733 --rc genhtml_branch_coverage=1 00:04:48.733 --rc genhtml_function_coverage=1 00:04:48.733 --rc genhtml_legend=1 00:04:48.733 --rc geninfo_all_blocks=1 00:04:48.733 --rc geninfo_unexecuted_blocks=1 00:04:48.733 00:04:48.733 ' 00:04:48.733 21:36:07 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:48.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.733 --rc genhtml_branch_coverage=1 00:04:48.733 --rc genhtml_function_coverage=1 00:04:48.733 --rc genhtml_legend=1 00:04:48.733 --rc geninfo_all_blocks=1 00:04:48.733 --rc geninfo_unexecuted_blocks=1 00:04:48.733 00:04:48.733 ' 00:04:48.733 21:36:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:48.733 21:36:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:48.733 21:36:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:48.733 21:36:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:48.733 21:36:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:48.733 21:36:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:48.733 21:36:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:48.733 21:36:07 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:48.733 21:36:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:48.733 21:36:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57840 00:04:48.733 21:36:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57840 00:04:48.733 21:36:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:48.733 21:36:07 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 57840 ']' 00:04:48.733 21:36:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.733 21:36:07 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:48.733 21:36:07 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.733 21:36:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:48.733 21:36:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.733 [2024-09-29 21:36:07.471531] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:48.733 [2024-09-29 21:36:07.471718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57840 ] 00:04:48.733 [2024-09-29 21:36:07.632727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.993 [2024-09-29 21:36:07.881423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.993 [2024-09-29 21:36:07.881465] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.933 21:36:08 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.933 21:36:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:49.933 21:36:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57857 00:04:49.933 21:36:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:49.933 21:36:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:50.194 [ 00:04:50.194 "bdev_malloc_delete", 00:04:50.194 "bdev_malloc_create", 00:04:50.194 "bdev_null_resize", 00:04:50.194 "bdev_null_delete", 00:04:50.194 "bdev_null_create", 00:04:50.194 "bdev_nvme_cuse_unregister", 00:04:50.194 "bdev_nvme_cuse_register", 00:04:50.194 "bdev_opal_new_user", 00:04:50.194 "bdev_opal_set_lock_state", 00:04:50.194 "bdev_opal_delete", 00:04:50.194 "bdev_opal_get_info", 00:04:50.194 "bdev_opal_create", 00:04:50.194 "bdev_nvme_opal_revert", 00:04:50.194 "bdev_nvme_opal_init", 00:04:50.194 "bdev_nvme_send_cmd", 00:04:50.194 "bdev_nvme_set_keys", 00:04:50.194 "bdev_nvme_get_path_iostat", 00:04:50.194 "bdev_nvme_get_mdns_discovery_info", 00:04:50.194 "bdev_nvme_stop_mdns_discovery", 00:04:50.194 "bdev_nvme_start_mdns_discovery", 00:04:50.194 "bdev_nvme_set_multipath_policy", 00:04:50.194 
"bdev_nvme_set_preferred_path", 00:04:50.194 "bdev_nvme_get_io_paths", 00:04:50.194 "bdev_nvme_remove_error_injection", 00:04:50.194 "bdev_nvme_add_error_injection", 00:04:50.194 "bdev_nvme_get_discovery_info", 00:04:50.194 "bdev_nvme_stop_discovery", 00:04:50.194 "bdev_nvme_start_discovery", 00:04:50.194 "bdev_nvme_get_controller_health_info", 00:04:50.194 "bdev_nvme_disable_controller", 00:04:50.194 "bdev_nvme_enable_controller", 00:04:50.194 "bdev_nvme_reset_controller", 00:04:50.194 "bdev_nvme_get_transport_statistics", 00:04:50.194 "bdev_nvme_apply_firmware", 00:04:50.194 "bdev_nvme_detach_controller", 00:04:50.194 "bdev_nvme_get_controllers", 00:04:50.194 "bdev_nvme_attach_controller", 00:04:50.194 "bdev_nvme_set_hotplug", 00:04:50.194 "bdev_nvme_set_options", 00:04:50.194 "bdev_passthru_delete", 00:04:50.194 "bdev_passthru_create", 00:04:50.194 "bdev_lvol_set_parent_bdev", 00:04:50.194 "bdev_lvol_set_parent", 00:04:50.194 "bdev_lvol_check_shallow_copy", 00:04:50.194 "bdev_lvol_start_shallow_copy", 00:04:50.194 "bdev_lvol_grow_lvstore", 00:04:50.194 "bdev_lvol_get_lvols", 00:04:50.194 "bdev_lvol_get_lvstores", 00:04:50.194 "bdev_lvol_delete", 00:04:50.194 "bdev_lvol_set_read_only", 00:04:50.194 "bdev_lvol_resize", 00:04:50.194 "bdev_lvol_decouple_parent", 00:04:50.194 "bdev_lvol_inflate", 00:04:50.194 "bdev_lvol_rename", 00:04:50.194 "bdev_lvol_clone_bdev", 00:04:50.194 "bdev_lvol_clone", 00:04:50.194 "bdev_lvol_snapshot", 00:04:50.194 "bdev_lvol_create", 00:04:50.194 "bdev_lvol_delete_lvstore", 00:04:50.194 "bdev_lvol_rename_lvstore", 00:04:50.194 "bdev_lvol_create_lvstore", 00:04:50.194 "bdev_raid_set_options", 00:04:50.194 "bdev_raid_remove_base_bdev", 00:04:50.194 "bdev_raid_add_base_bdev", 00:04:50.194 "bdev_raid_delete", 00:04:50.194 "bdev_raid_create", 00:04:50.194 "bdev_raid_get_bdevs", 00:04:50.194 "bdev_error_inject_error", 00:04:50.194 "bdev_error_delete", 00:04:50.194 "bdev_error_create", 00:04:50.194 "bdev_split_delete", 00:04:50.194 
"bdev_split_create", 00:04:50.194 "bdev_delay_delete", 00:04:50.194 "bdev_delay_create", 00:04:50.194 "bdev_delay_update_latency", 00:04:50.194 "bdev_zone_block_delete", 00:04:50.194 "bdev_zone_block_create", 00:04:50.194 "blobfs_create", 00:04:50.194 "blobfs_detect", 00:04:50.194 "blobfs_set_cache_size", 00:04:50.194 "bdev_aio_delete", 00:04:50.194 "bdev_aio_rescan", 00:04:50.194 "bdev_aio_create", 00:04:50.194 "bdev_ftl_set_property", 00:04:50.194 "bdev_ftl_get_properties", 00:04:50.194 "bdev_ftl_get_stats", 00:04:50.194 "bdev_ftl_unmap", 00:04:50.194 "bdev_ftl_unload", 00:04:50.194 "bdev_ftl_delete", 00:04:50.194 "bdev_ftl_load", 00:04:50.195 "bdev_ftl_create", 00:04:50.195 "bdev_virtio_attach_controller", 00:04:50.195 "bdev_virtio_scsi_get_devices", 00:04:50.195 "bdev_virtio_detach_controller", 00:04:50.195 "bdev_virtio_blk_set_hotplug", 00:04:50.195 "bdev_iscsi_delete", 00:04:50.195 "bdev_iscsi_create", 00:04:50.195 "bdev_iscsi_set_options", 00:04:50.195 "accel_error_inject_error", 00:04:50.195 "ioat_scan_accel_module", 00:04:50.195 "dsa_scan_accel_module", 00:04:50.195 "iaa_scan_accel_module", 00:04:50.195 "keyring_file_remove_key", 00:04:50.195 "keyring_file_add_key", 00:04:50.195 "keyring_linux_set_options", 00:04:50.195 "fsdev_aio_delete", 00:04:50.195 "fsdev_aio_create", 00:04:50.195 "iscsi_get_histogram", 00:04:50.195 "iscsi_enable_histogram", 00:04:50.195 "iscsi_set_options", 00:04:50.195 "iscsi_get_auth_groups", 00:04:50.195 "iscsi_auth_group_remove_secret", 00:04:50.195 "iscsi_auth_group_add_secret", 00:04:50.195 "iscsi_delete_auth_group", 00:04:50.195 "iscsi_create_auth_group", 00:04:50.195 "iscsi_set_discovery_auth", 00:04:50.195 "iscsi_get_options", 00:04:50.195 "iscsi_target_node_request_logout", 00:04:50.195 "iscsi_target_node_set_redirect", 00:04:50.195 "iscsi_target_node_set_auth", 00:04:50.195 "iscsi_target_node_add_lun", 00:04:50.195 "iscsi_get_stats", 00:04:50.195 "iscsi_get_connections", 00:04:50.195 "iscsi_portal_group_set_auth", 
00:04:50.195 "iscsi_start_portal_group", 00:04:50.195 "iscsi_delete_portal_group", 00:04:50.195 "iscsi_create_portal_group", 00:04:50.195 "iscsi_get_portal_groups", 00:04:50.195 "iscsi_delete_target_node", 00:04:50.195 "iscsi_target_node_remove_pg_ig_maps", 00:04:50.195 "iscsi_target_node_add_pg_ig_maps", 00:04:50.195 "iscsi_create_target_node", 00:04:50.195 "iscsi_get_target_nodes", 00:04:50.195 "iscsi_delete_initiator_group", 00:04:50.195 "iscsi_initiator_group_remove_initiators", 00:04:50.195 "iscsi_initiator_group_add_initiators", 00:04:50.195 "iscsi_create_initiator_group", 00:04:50.195 "iscsi_get_initiator_groups", 00:04:50.195 "nvmf_set_crdt", 00:04:50.195 "nvmf_set_config", 00:04:50.195 "nvmf_set_max_subsystems", 00:04:50.195 "nvmf_stop_mdns_prr", 00:04:50.195 "nvmf_publish_mdns_prr", 00:04:50.195 "nvmf_subsystem_get_listeners", 00:04:50.195 "nvmf_subsystem_get_qpairs", 00:04:50.195 "nvmf_subsystem_get_controllers", 00:04:50.195 "nvmf_get_stats", 00:04:50.195 "nvmf_get_transports", 00:04:50.195 "nvmf_create_transport", 00:04:50.195 "nvmf_get_targets", 00:04:50.195 "nvmf_delete_target", 00:04:50.195 "nvmf_create_target", 00:04:50.195 "nvmf_subsystem_allow_any_host", 00:04:50.195 "nvmf_subsystem_set_keys", 00:04:50.195 "nvmf_subsystem_remove_host", 00:04:50.195 "nvmf_subsystem_add_host", 00:04:50.195 "nvmf_ns_remove_host", 00:04:50.195 "nvmf_ns_add_host", 00:04:50.195 "nvmf_subsystem_remove_ns", 00:04:50.195 "nvmf_subsystem_set_ns_ana_group", 00:04:50.195 "nvmf_subsystem_add_ns", 00:04:50.195 "nvmf_subsystem_listener_set_ana_state", 00:04:50.195 "nvmf_discovery_get_referrals", 00:04:50.195 "nvmf_discovery_remove_referral", 00:04:50.195 "nvmf_discovery_add_referral", 00:04:50.195 "nvmf_subsystem_remove_listener", 00:04:50.195 "nvmf_subsystem_add_listener", 00:04:50.195 "nvmf_delete_subsystem", 00:04:50.195 "nvmf_create_subsystem", 00:04:50.195 "nvmf_get_subsystems", 00:04:50.195 "env_dpdk_get_mem_stats", 00:04:50.195 "nbd_get_disks", 00:04:50.195 
"nbd_stop_disk", 00:04:50.195 "nbd_start_disk", 00:04:50.195 "ublk_recover_disk", 00:04:50.195 "ublk_get_disks", 00:04:50.195 "ublk_stop_disk", 00:04:50.195 "ublk_start_disk", 00:04:50.195 "ublk_destroy_target", 00:04:50.195 "ublk_create_target", 00:04:50.195 "virtio_blk_create_transport", 00:04:50.195 "virtio_blk_get_transports", 00:04:50.195 "vhost_controller_set_coalescing", 00:04:50.195 "vhost_get_controllers", 00:04:50.195 "vhost_delete_controller", 00:04:50.195 "vhost_create_blk_controller", 00:04:50.195 "vhost_scsi_controller_remove_target", 00:04:50.195 "vhost_scsi_controller_add_target", 00:04:50.195 "vhost_start_scsi_controller", 00:04:50.195 "vhost_create_scsi_controller", 00:04:50.195 "thread_set_cpumask", 00:04:50.195 "scheduler_set_options", 00:04:50.195 "framework_get_governor", 00:04:50.195 "framework_get_scheduler", 00:04:50.195 "framework_set_scheduler", 00:04:50.195 "framework_get_reactors", 00:04:50.195 "thread_get_io_channels", 00:04:50.195 "thread_get_pollers", 00:04:50.195 "thread_get_stats", 00:04:50.195 "framework_monitor_context_switch", 00:04:50.195 "spdk_kill_instance", 00:04:50.195 "log_enable_timestamps", 00:04:50.195 "log_get_flags", 00:04:50.195 "log_clear_flag", 00:04:50.195 "log_set_flag", 00:04:50.195 "log_get_level", 00:04:50.195 "log_set_level", 00:04:50.195 "log_get_print_level", 00:04:50.195 "log_set_print_level", 00:04:50.195 "framework_enable_cpumask_locks", 00:04:50.195 "framework_disable_cpumask_locks", 00:04:50.195 "framework_wait_init", 00:04:50.195 "framework_start_init", 00:04:50.195 "scsi_get_devices", 00:04:50.195 "bdev_get_histogram", 00:04:50.195 "bdev_enable_histogram", 00:04:50.195 "bdev_set_qos_limit", 00:04:50.195 "bdev_set_qd_sampling_period", 00:04:50.195 "bdev_get_bdevs", 00:04:50.195 "bdev_reset_iostat", 00:04:50.195 "bdev_get_iostat", 00:04:50.195 "bdev_examine", 00:04:50.195 "bdev_wait_for_examine", 00:04:50.195 "bdev_set_options", 00:04:50.195 "accel_get_stats", 00:04:50.195 "accel_set_options", 
00:04:50.195 "accel_set_driver", 00:04:50.195 "accel_crypto_key_destroy", 00:04:50.195 "accel_crypto_keys_get", 00:04:50.195 "accel_crypto_key_create", 00:04:50.195 "accel_assign_opc", 00:04:50.195 "accel_get_module_info", 00:04:50.195 "accel_get_opc_assignments", 00:04:50.195 "vmd_rescan", 00:04:50.195 "vmd_remove_device", 00:04:50.195 "vmd_enable", 00:04:50.195 "sock_get_default_impl", 00:04:50.195 "sock_set_default_impl", 00:04:50.195 "sock_impl_set_options", 00:04:50.195 "sock_impl_get_options", 00:04:50.195 "iobuf_get_stats", 00:04:50.195 "iobuf_set_options", 00:04:50.195 "keyring_get_keys", 00:04:50.195 "framework_get_pci_devices", 00:04:50.195 "framework_get_config", 00:04:50.195 "framework_get_subsystems", 00:04:50.195 "fsdev_set_opts", 00:04:50.195 "fsdev_get_opts", 00:04:50.195 "trace_get_info", 00:04:50.195 "trace_get_tpoint_group_mask", 00:04:50.195 "trace_disable_tpoint_group", 00:04:50.196 "trace_enable_tpoint_group", 00:04:50.196 "trace_clear_tpoint_mask", 00:04:50.196 "trace_set_tpoint_mask", 00:04:50.196 "notify_get_notifications", 00:04:50.196 "notify_get_types", 00:04:50.196 "spdk_get_version", 00:04:50.196 "rpc_get_methods" 00:04:50.196 ] 00:04:50.196 21:36:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:50.196 21:36:09 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:50.196 21:36:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.196 21:36:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:50.196 21:36:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57840 00:04:50.196 21:36:09 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57840 ']' 00:04:50.196 21:36:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57840 00:04:50.196 21:36:09 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:50.196 21:36:09 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.196 21:36:09 spdkcli_tcp -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57840 00:04:50.196 killing process with pid 57840 00:04:50.196 21:36:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.196 21:36:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.196 21:36:09 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57840' 00:04:50.196 21:36:09 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57840 00:04:50.196 21:36:09 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57840 00:04:53.501 ************************************ 00:04:53.501 END TEST spdkcli_tcp 00:04:53.501 ************************************ 00:04:53.501 00:04:53.501 real 0m4.700s 00:04:53.501 user 0m8.018s 00:04:53.501 sys 0m0.797s 00:04:53.501 21:36:11 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.501 21:36:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.501 21:36:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.501 21:36:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:53.501 21:36:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:53.501 21:36:11 -- common/autotest_common.sh@10 -- # set +x 00:04:53.501 ************************************ 00:04:53.501 START TEST dpdk_mem_utility 00:04:53.501 ************************************ 00:04:53.501 21:36:11 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.501 * Looking for test storage... 
00:04:53.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:53.501 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:53.501 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:04:53.501 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:53.501 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.501 21:36:12 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:53.501 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.501 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:53.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.501 --rc genhtml_branch_coverage=1 00:04:53.501 --rc genhtml_function_coverage=1 00:04:53.501 --rc genhtml_legend=1 00:04:53.501 --rc geninfo_all_blocks=1 00:04:53.501 --rc geninfo_unexecuted_blocks=1 00:04:53.501 00:04:53.501 ' 00:04:53.501 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:53.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.501 --rc genhtml_branch_coverage=1 00:04:53.501 --rc genhtml_function_coverage=1 00:04:53.501 --rc genhtml_legend=1 00:04:53.501 --rc geninfo_all_blocks=1 00:04:53.501 --rc 
geninfo_unexecuted_blocks=1 00:04:53.501 00:04:53.501 ' 00:04:53.502 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:53.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.502 --rc genhtml_branch_coverage=1 00:04:53.502 --rc genhtml_function_coverage=1 00:04:53.502 --rc genhtml_legend=1 00:04:53.502 --rc geninfo_all_blocks=1 00:04:53.502 --rc geninfo_unexecuted_blocks=1 00:04:53.502 00:04:53.502 ' 00:04:53.502 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:53.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.502 --rc genhtml_branch_coverage=1 00:04:53.502 --rc genhtml_function_coverage=1 00:04:53.502 --rc genhtml_legend=1 00:04:53.502 --rc geninfo_all_blocks=1 00:04:53.502 --rc geninfo_unexecuted_blocks=1 00:04:53.502 00:04:53.502 ' 00:04:53.502 21:36:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:53.502 21:36:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57967 00:04:53.502 21:36:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.502 21:36:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57967 00:04:53.502 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 57967 ']' 00:04:53.502 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.502 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:53.502 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:53.502 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:53.502 21:36:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:53.502 [2024-09-29 21:36:12.223933] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:53.502 [2024-09-29 21:36:12.224135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57967 ] 00:04:53.502 [2024-09-29 21:36:12.389070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.762 [2024-09-29 21:36:12.630183] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.701 21:36:13 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:54.701 21:36:13 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:54.701 21:36:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:54.701 21:36:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:54.701 21:36:13 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.701 21:36:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.701 { 00:04:54.701 "filename": "/tmp/spdk_mem_dump.txt" 00:04:54.701 } 00:04:54.701 21:36:13 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:54.701 21:36:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:54.701 DPDK memory size 866.000000 MiB in 1 heap(s) 00:04:54.701 1 heaps totaling size 866.000000 MiB 00:04:54.701 size: 866.000000 MiB heap id: 0 00:04:54.701 end heaps---------- 00:04:54.701 9 mempools totaling size 642.649841 MiB 00:04:54.701 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:54.701 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:54.701 size: 92.545471 MiB name: bdev_io_57967 00:04:54.701 size: 51.011292 MiB name: evtpool_57967 00:04:54.701 size: 50.003479 MiB name: msgpool_57967 00:04:54.701 size: 36.509338 MiB name: fsdev_io_57967 00:04:54.701 size: 21.763794 MiB name: PDU_Pool 00:04:54.701 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:54.701 size: 0.026123 MiB name: Session_Pool 00:04:54.701 end mempools------- 00:04:54.701 6 memzones totaling size 4.142822 MiB 00:04:54.701 size: 1.000366 MiB name: RG_ring_0_57967 00:04:54.701 size: 1.000366 MiB name: RG_ring_1_57967 00:04:54.701 size: 1.000366 MiB name: RG_ring_4_57967 00:04:54.701 size: 1.000366 MiB name: RG_ring_5_57967 00:04:54.701 size: 0.125366 MiB name: RG_ring_2_57967 00:04:54.701 size: 0.015991 MiB name: RG_ring_3_57967 00:04:54.701 end memzones------- 00:04:54.701 21:36:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:54.963 heap id: 0 total size: 866.000000 MiB number of busy elements: 312 number of free elements: 19 00:04:54.963 list of free elements. 
size: 19.914307 MiB 00:04:54.963 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:54.963 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:54.963 element at address: 0x200009600000 with size: 1.995972 MiB 00:04:54.963 element at address: 0x20000d800000 with size: 1.995972 MiB 00:04:54.963 element at address: 0x200007000000 with size: 1.991028 MiB 00:04:54.963 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:04:54.963 element at address: 0x20001c300040 with size: 0.999939 MiB 00:04:54.963 element at address: 0x20001c400000 with size: 0.999084 MiB 00:04:54.963 element at address: 0x200035000000 with size: 0.994324 MiB 00:04:54.963 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:04:54.963 element at address: 0x20001c700040 with size: 0.936401 MiB 00:04:54.963 element at address: 0x200000200000 with size: 0.832153 MiB 00:04:54.963 element at address: 0x20001de00000 with size: 0.562195 MiB 00:04:54.963 element at address: 0x200003e00000 with size: 0.490417 MiB 00:04:54.963 element at address: 0x20001c000000 with size: 0.488953 MiB 00:04:54.963 element at address: 0x20001c800000 with size: 0.485413 MiB 00:04:54.963 element at address: 0x200015e00000 with size: 0.443237 MiB 00:04:54.963 element at address: 0x20002b200000 with size: 0.390442 MiB 00:04:54.963 element at address: 0x200003a00000 with size: 0.352844 MiB 00:04:54.963 list of standard malloc elements. 
size: 199.286987 MiB
00:04:54.963 element at address: 0x20000d9fef80 with size: 132.000183 MiB
00:04:54.963 element at address: 0x2000097fef80 with size: 64.000183 MiB
00:04:54.963 element at address: 0x20001bdfff80 with size: 1.000183 MiB
00:04:54.963 element at address: 0x20001c1fff80 with size: 1.000183 MiB
00:04:54.963 element at address: 0x20001c5fff80 with size: 1.000183 MiB
00:04:54.963 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:04:54.963 element at address: 0x20001c7eff40 with size: 0.062683 MiB
00:04:54.963 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:04:54.963 element at address: 0x20000d7ff040 with size: 0.000427 MiB
00:04:54.963 element at address: 0x20001c7efdc0 with size: 0.000366 MiB
00:04:54.963 element at address: 0x200015dff040 with size: 0.000305 MiB
00:04:54.963 [several hundred further elements of 0.000244 MiB each, at consecutive addresses from 0x2000002d5080 through 0x20002b26fe80, omitted]
00:04:54.965 list of memzone associated elements. size: 646.798706 MiB
00:04:54.965 element at address: 0x20001de954c0 with size: 211.416809 MiB
00:04:54.965 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:04:54.965 element at address: 0x20002b26ff80 with size: 157.562622 MiB
00:04:54.965 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:04:54.965 element at address: 0x200015ff4740 with size: 92.045105 MiB
00:04:54.965 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57967_0
00:04:54.965 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:04:54.965 associated memzone info: size: 48.002930 MiB name: MP_evtpool_57967_0
00:04:54.965 element at address: 0x200003fff340 with size: 48.003113 MiB
00:04:54.965 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57967_0
00:04:54.965 element at address: 0x2000071fdb40 with size: 36.008972 MiB
00:04:54.965 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57967_0
00:04:54.965 element at address: 0x20001c9be900 with size: 20.255615 MiB
00:04:54.965 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:04:54.965 element at address: 0x2000351feb00 with size: 18.005127 MiB
00:04:54.965 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:04:54.965 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:04:54.965 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_57967
00:04:54.965 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:04:54.965 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57967
00:04:54.965 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:04:54.965 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57967
00:04:54.965 element at address: 0x20001c0fde00 with size: 1.008179 MiB
00:04:54.965 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:04:54.965 element at address: 0x20001c8bc780 with size: 1.008179 MiB
00:04:54.965 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:04:54.965 element at address: 0x20001bcfde00 with size: 1.008179 MiB
00:04:54.965 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:04:54.965 element at address: 0x200015ef25c0 with size: 1.008179 MiB
00:04:54.965 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:04:54.965 element at address: 0x200003eff100 with size: 1.000549 MiB
00:04:54.965 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57967
00:04:54.965 element at address: 0x200003affb80 with size: 1.000549 MiB
00:04:54.965 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57967
00:04:54.965 element at address: 0x20001c4ffd40 with size: 1.000549 MiB
00:04:54.965 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57967
00:04:54.965 element at address: 0x2000350fe8c0 with size: 1.000549 MiB
00:04:54.965 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57967
00:04:54.965 element at address: 0x200003a7f4c0 with size: 0.500549 MiB
00:04:54.965 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57967
00:04:54.965 element at address: 0x200003e7edc0 with size: 0.500549 MiB
00:04:54.965 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57967
00:04:54.965 element at address: 0x20001c07dac0 with size: 0.500549 MiB
00:04:54.965 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:04:54.965 element at address: 0x200015e72280 with size: 0.500549 MiB
00:04:54.965 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:04:54.965 element at address: 0x20001c87c440 with size: 0.250549 MiB
00:04:54.965 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:04:54.965 element at address: 0x200003a5e780 with size: 0.125549 MiB
00:04:54.965 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57967
00:04:54.965 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB
00:04:54.965 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:04:54.965 element at address: 0x20002b264140 with size: 0.023804 MiB
00:04:54.965 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:04:54.965 element at address: 0x200003a5a540 with size: 0.016174 MiB
00:04:54.965 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57967
00:04:54.965 element at address: 0x20002b26a2c0 with size: 0.002502 MiB
00:04:54.965 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:04:54.965 element at address: 0x2000002d6180 with size: 0.000366 MiB
00:04:54.965 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57967
00:04:54.965 element at address: 0x200003aff800 with size: 0.000366 MiB
00:04:54.965 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57967
00:04:54.965 element at address: 0x200015dffd80 with size: 0.000366 MiB
00:04:54.965 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57967
00:04:54.965 element at address: 0x20002b26ae00 with size: 0.000366 MiB
00:04:54.965 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:04:54.965 21:36:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:04:54.965 21:36:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57967
00:04:54.965 21:36:13 dpdk_mem_utility
-- common/autotest_common.sh@950 -- # '[' -z 57967 ']'
00:04:54.965 21:36:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 57967
00:04:54.966 21:36:13 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:04:54.966 21:36:13 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:04:54.966 21:36:13 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57967
00:04:54.966 21:36:13 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:04:54.966 21:36:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:04:54.966 21:36:13 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57967'
00:04:54.966 killing process with pid 57967
21:36:13 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 57967
00:04:54.966 21:36:13 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 57967
00:04:57.504
00:04:57.504 real 0m4.522s
00:04:57.504 user 0m4.219s
00:04:57.504 sys 0m0.747s
00:04:57.504 21:36:16 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:57.504 21:36:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:04:57.504 ************************************
00:04:57.504 END TEST dpdk_mem_utility
00:04:57.504 ************************************
00:04:57.504 21:36:16 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:57.504 21:36:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:57.504 21:36:16 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:57.504 21:36:16 -- common/autotest_common.sh@10 -- # set +x
00:04:57.763 ************************************
00:04:57.763 START TEST event
00:04:57.763 ************************************
00:04:57.763 21:36:16 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:04:57.763 * Looking for test storage...
00:04:57.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:04:57.763 21:36:16 event -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:04:57.763 21:36:16 event -- common/autotest_common.sh@1681 -- # lcov --version
00:04:57.764 21:36:16 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:04:57.764 21:36:16 event -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:04:57.764 21:36:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:57.764 21:36:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:57.764 21:36:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:57.764 21:36:16 event -- scripts/common.sh@336 -- # IFS=.-:
00:04:57.764 21:36:16 event -- scripts/common.sh@336 -- # read -ra ver1
00:04:57.764 21:36:16 event -- scripts/common.sh@337 -- # IFS=.-:
00:04:57.764 21:36:16 event -- scripts/common.sh@337 -- # read -ra ver2
00:04:57.764 21:36:16 event -- scripts/common.sh@338 -- # local 'op=<'
00:04:57.764 21:36:16 event -- scripts/common.sh@340 -- # ver1_l=2
00:04:57.764 21:36:16 event -- scripts/common.sh@341 -- # ver2_l=1
00:04:57.764 21:36:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:57.764 21:36:16 event -- scripts/common.sh@344 -- # case "$op" in
00:04:57.764 21:36:16 event -- scripts/common.sh@345 -- # : 1
00:04:57.764 21:36:16 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:57.764 21:36:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:57.764 21:36:16 event -- scripts/common.sh@365 -- # decimal 1
00:04:57.764 21:36:16 event -- scripts/common.sh@353 -- # local d=1
00:04:57.764 21:36:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:57.764 21:36:16 event -- scripts/common.sh@355 -- # echo 1
00:04:57.764 21:36:16 event -- scripts/common.sh@365 -- # ver1[v]=1
00:04:57.764 21:36:16 event -- scripts/common.sh@366 -- # decimal 2
00:04:57.764 21:36:16 event -- scripts/common.sh@353 -- # local d=2
00:04:57.764 21:36:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:57.764 21:36:16 event -- scripts/common.sh@355 -- # echo 2
00:04:57.764 21:36:16 event -- scripts/common.sh@366 -- # ver2[v]=2
00:04:57.764 21:36:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:57.764 21:36:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:57.764 21:36:16 event -- scripts/common.sh@368 -- # return 0
00:04:57.764 21:36:16 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:57.764 21:36:16 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:04:57.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.764 --rc genhtml_branch_coverage=1
00:04:57.764 --rc genhtml_function_coverage=1
00:04:57.764 --rc genhtml_legend=1
00:04:57.764 --rc geninfo_all_blocks=1
00:04:57.764 --rc geninfo_unexecuted_blocks=1
00:04:57.764
00:04:57.764 '
00:04:57.764 21:36:16 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:04:57.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.764 --rc genhtml_branch_coverage=1
00:04:57.764 --rc genhtml_function_coverage=1
00:04:57.764 --rc genhtml_legend=1
00:04:57.764 --rc geninfo_all_blocks=1
00:04:57.764 --rc geninfo_unexecuted_blocks=1
00:04:57.764
00:04:57.764 '
00:04:57.764 21:36:16 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:04:57.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.764 --rc genhtml_branch_coverage=1
00:04:57.764 --rc genhtml_function_coverage=1
00:04:57.764 --rc genhtml_legend=1
00:04:57.764 --rc geninfo_all_blocks=1
00:04:57.764 --rc geninfo_unexecuted_blocks=1
00:04:57.764
00:04:57.764 '
00:04:57.764 21:36:16 event -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:04:57.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:57.764 --rc genhtml_branch_coverage=1
00:04:57.764 --rc genhtml_function_coverage=1
00:04:57.764 --rc genhtml_legend=1
00:04:57.764 --rc geninfo_all_blocks=1
00:04:57.764 --rc geninfo_unexecuted_blocks=1
00:04:57.764
00:04:57.764 '
00:04:57.764 21:36:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:04:57.764 21:36:16 event -- bdev/nbd_common.sh@6 -- # set -e
00:04:57.764 21:36:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:57.764 21:36:16 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:04:57.764 21:36:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:57.764 21:36:16 event -- common/autotest_common.sh@10 -- # set +x
00:04:57.764 ************************************
00:04:57.764 START TEST event_perf
00:04:57.764 ************************************
00:04:57.764 21:36:16 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:04:58.024 Running I/O for 1 seconds...[2024-09-29 21:36:16.786155] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:04:58.024 [2024-09-29 21:36:16.786301] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58081 ]
00:04:58.284 [2024-09-29 21:36:16.954958] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:04:58.284 [2024-09-29 21:36:17.207500] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:04:58.284 [2024-09-29 21:36:17.207731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:04:58.284 [2024-09-29 21:36:17.207893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:04:58.284 Running I/O for 1 seconds...[2024-09-29 21:36:17.207930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:04:59.695
00:04:59.695 lcore 0: 79473
00:04:59.695 lcore 1: 79476
00:04:59.695 lcore 2: 79479
00:04:59.695 lcore 3: 79478
00:04:59.695 done.
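event_perf above was launched with `-m 0xF`, a hexadecimal core mask; the app then reports four available cores and starts one reactor per selected core (0-3), each of which processes roughly 79k events in the one-second run. A small sketch of how such a mask expands to a core list (assumption: a plain hex mask, not DPDK's bracketed core-list syntax, and a hypothetical helper name):

```shell
#!/usr/bin/env bash
# Hypothetical helper: expand a hex core mask like the "-m 0xF" above
# into the list of core IDs whose bits are set.
mask_to_cores() {
    local mask=$(( $1 ))   # bash arithmetic accepts 0x-prefixed hex
    local core cores=()
    for ((core = 0; core < 64; core++)); do
        (( mask & (1 << core) )) && cores+=("$core")
    done
    echo "${cores[*]}"
}
```

For example, `mask_to_cores 0xF` yields `0 1 2 3`, matching the four reactors started in the log.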
00:04:59.695 00:04:59.695 real 0m1.887s 00:04:59.695 user 0m4.607s 00:04:59.695 sys 0m0.153s 00:04:59.695 21:36:18 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.695 21:36:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.695 ************************************ 00:04:59.695 END TEST event_perf 00:04:59.695 ************************************ 00:04:59.955 21:36:18 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.955 21:36:18 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:59.955 21:36:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.955 21:36:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.955 ************************************ 00:04:59.955 START TEST event_reactor 00:04:59.955 ************************************ 00:04:59.955 21:36:18 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.955 [2024-09-29 21:36:18.744598] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:59.955 [2024-09-29 21:36:18.744703] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58126 ] 00:04:59.955 [2024-09-29 21:36:18.908873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.215 [2024-09-29 21:36:19.156302] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.595 test_start 00:05:01.595 oneshot 00:05:01.595 tick 100 00:05:01.595 tick 100 00:05:01.595 tick 250 00:05:01.595 tick 100 00:05:01.595 tick 100 00:05:01.595 tick 100 00:05:01.595 tick 250 00:05:01.595 tick 500 00:05:01.595 tick 100 00:05:01.595 tick 100 00:05:01.595 tick 250 00:05:01.595 tick 100 00:05:01.595 tick 100 00:05:01.595 test_end 00:05:01.595 00:05:01.595 real 0m1.863s 00:05:01.595 user 0m1.624s 00:05:01.595 sys 0m0.130s 00:05:01.595 ************************************ 00:05:01.595 END TEST event_reactor 00:05:01.595 ************************************ 00:05:01.595 21:36:20 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.595 21:36:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:01.855 21:36:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.855 21:36:20 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:01.855 21:36:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.855 21:36:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.855 ************************************ 00:05:01.855 START TEST event_reactor_perf 00:05:01.855 ************************************ 00:05:01.855 21:36:20 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.855 [2024-09-29 
21:36:20.676042] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:01.855 [2024-09-29 21:36:20.676160] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58162 ] 00:05:02.114 [2024-09-29 21:36:20.842052] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.114 [2024-09-29 21:36:21.077003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.495 test_start 00:05:03.495 test_end 00:05:03.495 Performance: 414480 events per second 00:05:03.755 ************************************ 00:05:03.755 END TEST event_reactor_perf 00:05:03.755 ************************************ 00:05:03.755 00:05:03.755 real 0m1.850s 00:05:03.755 user 0m1.607s 00:05:03.755 sys 0m0.134s 00:05:03.755 21:36:22 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.755 21:36:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.755 21:36:22 event -- event/event.sh@49 -- # uname -s 00:05:03.755 21:36:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:03.755 21:36:22 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:03.755 21:36:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.755 21:36:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.755 21:36:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.755 ************************************ 00:05:03.755 START TEST event_scheduler 00:05:03.755 ************************************ 00:05:03.755 21:36:22 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:03.755 * Looking for test storage... 
00:05:03.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:03.755 21:36:22 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:03.755 21:36:22 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:03.755 21:36:22 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:04.015 21:36:22 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:04.015 21:36:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.015 21:36:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.015 21:36:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.015 21:36:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.015 21:36:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.015 21:36:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.015 21:36:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.015 21:36:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.015 21:36:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.016 21:36:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:04.016 21:36:22 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.016 21:36:22 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:04.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.016 --rc genhtml_branch_coverage=1 00:05:04.016 --rc genhtml_function_coverage=1 00:05:04.016 --rc genhtml_legend=1 00:05:04.016 --rc geninfo_all_blocks=1 00:05:04.016 --rc geninfo_unexecuted_blocks=1 00:05:04.016 00:05:04.016 ' 00:05:04.016 21:36:22 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:04.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.016 --rc genhtml_branch_coverage=1 00:05:04.016 --rc genhtml_function_coverage=1 00:05:04.016 --rc 
genhtml_legend=1 00:05:04.016 --rc geninfo_all_blocks=1 00:05:04.016 --rc geninfo_unexecuted_blocks=1 00:05:04.016 00:05:04.016 ' 00:05:04.016 21:36:22 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:04.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.016 --rc genhtml_branch_coverage=1 00:05:04.016 --rc genhtml_function_coverage=1 00:05:04.016 --rc genhtml_legend=1 00:05:04.016 --rc geninfo_all_blocks=1 00:05:04.016 --rc geninfo_unexecuted_blocks=1 00:05:04.016 00:05:04.016 ' 00:05:04.016 21:36:22 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:04.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.016 --rc genhtml_branch_coverage=1 00:05:04.016 --rc genhtml_function_coverage=1 00:05:04.016 --rc genhtml_legend=1 00:05:04.016 --rc geninfo_all_blocks=1 00:05:04.016 --rc geninfo_unexecuted_blocks=1 00:05:04.016 00:05:04.016 ' 00:05:04.016 21:36:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:04.016 21:36:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:04.016 21:36:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58244 00:05:04.016 21:36:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.016 21:36:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58244 00:05:04.016 21:36:22 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58244 ']' 00:05:04.016 21:36:22 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.016 21:36:22 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:04.016 21:36:22 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:04.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.016 21:36:22 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:04.016 21:36:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.016 [2024-09-29 21:36:22.856424] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:04.016 [2024-09-29 21:36:22.856627] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58244 ] 00:05:04.275 [2024-09-29 21:36:23.020727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.535 [2024-09-29 21:36:23.290584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.535 [2024-09-29 21:36:23.290753] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.535 [2024-09-29 21:36:23.290844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.535 [2024-09-29 21:36:23.290892] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.795 21:36:23 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.795 21:36:23 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:04.795 21:36:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:04.795 21:36:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.795 21:36:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.795 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:04.795 POWER: Cannot set governor of lcore 0 to userspace 00:05:04.795 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:04.795 POWER: Cannot set governor of lcore 0 to performance 00:05:04.795 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:04.795 POWER: Cannot set governor of lcore 0 to userspace 00:05:04.795 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:04.795 POWER: Cannot set governor of lcore 0 to userspace 00:05:04.795 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:04.795 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:04.795 POWER: Unable to set Power Management Environment for lcore 0 00:05:04.795 [2024-09-29 21:36:23.687899] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:04.795 [2024-09-29 21:36:23.687919] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:04.795 [2024-09-29 21:36:23.687930] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:04.795 [2024-09-29 21:36:23.687952] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:04.795 [2024-09-29 21:36:23.687961] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:04.795 [2024-09-29 21:36:23.687987] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:04.795 21:36:23 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.795 21:36:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:04.795 21:36:23 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.795 21:36:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.366 [2024-09-29 21:36:24.046953] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:05.366 21:36:24 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.366 21:36:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:05.366 21:36:24 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.366 21:36:24 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.366 21:36:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.366 ************************************ 00:05:05.366 START TEST scheduler_create_thread 00:05:05.366 ************************************ 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.366 2 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.366 3 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.366 4 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.366 5 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.366 6 00:05:05.366 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:05.367 7 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.367 8 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.367 9 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.367 10 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.367 21:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.305 21:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.305 21:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:06.305 21:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:06.305 21:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.305 21:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.244 21:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.244 21:36:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:07.244 21:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.244 21:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.183 21:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.183 21:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:08.183 21:36:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:08.183 21:36:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.183 21:36:26 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.753 ************************************ 00:05:08.753 END TEST scheduler_create_thread 00:05:08.753 ************************************ 00:05:08.753 21:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.753 00:05:08.753 real 0m3.553s 00:05:08.753 user 0m0.029s 00:05:08.753 sys 0m0.008s 00:05:08.753 21:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.753 21:36:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.753 21:36:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:08.753 21:36:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58244 00:05:08.753 21:36:27 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58244 ']' 00:05:08.753 21:36:27 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58244 00:05:08.753 21:36:27 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:08.753 21:36:27 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.753 21:36:27 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58244 00:05:08.753 killing process with pid 58244 00:05:08.753 21:36:27 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:08.753 21:36:27 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:08.753 21:36:27 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58244' 00:05:08.753 21:36:27 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58244 00:05:08.753 21:36:27 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58244 00:05:09.012 [2024-09-29 21:36:27.995078] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:10.922 ************************************ 00:05:10.922 END TEST event_scheduler 00:05:10.922 ************************************ 00:05:10.922 00:05:10.922 real 0m6.863s 00:05:10.922 user 0m12.270s 00:05:10.922 sys 0m0.589s 00:05:10.922 21:36:29 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.922 21:36:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.922 21:36:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:10.922 21:36:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:10.922 21:36:29 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.922 21:36:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.922 21:36:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.922 ************************************ 00:05:10.922 START TEST app_repeat 00:05:10.922 ************************************ 00:05:10.922 21:36:29 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:10.922 21:36:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.922 21:36:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.922 21:36:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:10.922 21:36:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.922 21:36:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:10.922 21:36:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:10.922 21:36:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:10.922 21:36:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58361 00:05:10.922 21:36:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:10.922 
21:36:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.922 21:36:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58361' 00:05:10.922 Process app_repeat pid: 58361 00:05:10.922 21:36:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:10.922 21:36:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:10.922 spdk_app_start Round 0 00:05:10.922 21:36:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58361 /var/tmp/spdk-nbd.sock 00:05:10.922 21:36:29 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58361 ']' 00:05:10.922 21:36:29 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:10.922 21:36:29 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.922 21:36:29 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:10.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:10.922 21:36:29 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.922 21:36:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:10.922 [2024-09-29 21:36:29.566507] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:10.922 [2024-09-29 21:36:29.566692] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58361 ] 00:05:10.922 [2024-09-29 21:36:29.734378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.182 [2024-09-29 21:36:29.984996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.182 [2024-09-29 21:36:29.985078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.442 21:36:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.442 21:36:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:11.442 21:36:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.702 Malloc0 00:05:11.961 21:36:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.221 Malloc1 00:05:12.221 21:36:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.221 21:36:30 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.221 21:36:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.221 /dev/nbd0 00:05:12.481 21:36:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.481 21:36:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.481 1+0 records in 00:05:12.481 1+0 
records out 00:05:12.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196421 s, 20.9 MB/s 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:12.481 21:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.481 21:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.481 21:36:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.481 /dev/nbd1 00:05:12.481 21:36:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.481 21:36:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:12.481 21:36:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:12.482 21:36:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:12.482 21:36:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:12.482 21:36:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:12.482 21:36:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:12.482 21:36:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:12.482 21:36:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:12.482 21:36:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.482 1+0 records in 00:05:12.482 1+0 records out 00:05:12.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491427 s, 8.3 MB/s 00:05:12.741 21:36:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.741 21:36:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:12.741 21:36:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.741 21:36:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:12.741 21:36:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:12.741 21:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.741 21:36:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.741 21:36:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.741 21:36:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.741 21:36:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.741 21:36:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.741 { 00:05:12.741 "nbd_device": "/dev/nbd0", 00:05:12.741 "bdev_name": "Malloc0" 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "nbd_device": "/dev/nbd1", 00:05:12.741 "bdev_name": "Malloc1" 00:05:12.741 } 00:05:12.741 ]' 00:05:12.741 21:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.741 { 00:05:12.741 "nbd_device": "/dev/nbd0", 00:05:12.741 "bdev_name": "Malloc0" 00:05:12.741 }, 00:05:12.741 { 00:05:12.741 "nbd_device": "/dev/nbd1", 00:05:12.741 "bdev_name": "Malloc1" 00:05:12.741 } 00:05:12.741 ]' 00:05:12.742 21:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.002 /dev/nbd1' 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.002 /dev/nbd1' 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.002 256+0 records in 00:05:13.002 256+0 records out 00:05:13.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142417 s, 73.6 MB/s 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.002 256+0 records in 00:05:13.002 256+0 records out 00:05:13.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282788 s, 37.1 MB/s 00:05:13.002 21:36:31 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.002 256+0 records in 00:05:13.002 256+0 records out 00:05:13.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029513 s, 35.5 MB/s 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.002 21:36:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.262 21:36:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.262 21:36:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.262 21:36:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.262 21:36:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.262 21:36:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.262 21:36:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.262 21:36:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.262 21:36:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.262 21:36:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.262 21:36:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.522 21:36:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.522 21:36:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.522 21:36:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.522 21:36:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.522 21:36:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.522 21:36:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.522 21:36:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:13.522 21:36:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.522 21:36:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.522 21:36:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.522 21:36:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.781 21:36:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.781 21:36:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.781 21:36:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.781 21:36:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.781 21:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.781 21:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.781 21:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.781 21:36:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.781 21:36:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.781 21:36:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.781 21:36:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.781 21:36:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.781 21:36:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.041 21:36:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.431 [2024-09-29 21:36:34.201461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.431 [2024-09-29 21:36:34.400584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.431 [2024-09-29 21:36:34.400586] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.719 
[2024-09-29 21:36:34.579560] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.719 [2024-09-29 21:36:34.579666] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.110 spdk_app_start Round 1 00:05:17.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.110 21:36:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.110 21:36:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:17.110 21:36:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58361 /var/tmp/spdk-nbd.sock 00:05:17.110 21:36:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58361 ']' 00:05:17.110 21:36:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.111 21:36:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.111 21:36:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:17.111 21:36:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.111 21:36:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.370 21:36:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.370 21:36:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:17.370 21:36:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.630 Malloc0 00:05:17.630 21:36:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.889 Malloc1 00:05:17.889 21:36:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.889 21:36:36 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.889 21:36:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.149 /dev/nbd0 00:05:18.149 21:36:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.149 21:36:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.149 21:36:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:18.149 21:36:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:18.149 21:36:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:18.149 21:36:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:18.149 21:36:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:18.149 21:36:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:18.149 21:36:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:18.149 21:36:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:18.149 21:36:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.149 1+0 records in 00:05:18.149 1+0 records out 00:05:18.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311996 s, 13.1 MB/s 00:05:18.149 21:36:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.149 21:36:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:18.149 21:36:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.149 
21:36:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:18.149 21:36:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:18.149 21:36:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.149 21:36:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.149 21:36:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.149 /dev/nbd1 00:05:18.410 21:36:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.410 21:36:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.410 21:36:37 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:18.410 21:36:37 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:18.410 21:36:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:18.410 21:36:37 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:18.410 21:36:37 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:18.410 21:36:37 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:18.410 21:36:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:18.410 21:36:37 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:18.410 21:36:37 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.410 1+0 records in 00:05:18.410 1+0 records out 00:05:18.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354244 s, 11.6 MB/s 00:05:18.410 21:36:37 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.410 21:36:37 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:18.410 21:36:37 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.410 21:36:37 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:18.410 21:36:37 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:18.410 21:36:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.410 21:36:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.410 21:36:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.410 21:36:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.410 21:36:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.410 21:36:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.410 { 00:05:18.410 "nbd_device": "/dev/nbd0", 00:05:18.410 "bdev_name": "Malloc0" 00:05:18.410 }, 00:05:18.410 { 00:05:18.410 "nbd_device": "/dev/nbd1", 00:05:18.410 "bdev_name": "Malloc1" 00:05:18.410 } 00:05:18.410 ]' 00:05:18.410 21:36:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.410 { 00:05:18.410 "nbd_device": "/dev/nbd0", 00:05:18.410 "bdev_name": "Malloc0" 00:05:18.410 }, 00:05:18.410 { 00:05:18.410 "nbd_device": "/dev/nbd1", 00:05:18.410 "bdev_name": "Malloc1" 00:05:18.410 } 00:05:18.410 ]' 00:05:18.410 21:36:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.670 21:36:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.670 /dev/nbd1' 00:05:18.670 21:36:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.670 /dev/nbd1' 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.671 
21:36:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.671 256+0 records in 00:05:18.671 256+0 records out 00:05:18.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126566 s, 82.8 MB/s 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.671 256+0 records in 00:05:18.671 256+0 records out 00:05:18.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025173 s, 41.7 MB/s 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.671 256+0 records in 00:05:18.671 256+0 records out 00:05:18.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269178 s, 39.0 MB/s 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.671 21:36:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.931 21:36:37 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.931 21:36:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.931 21:36:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.931 21:36:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.931 21:36:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.931 21:36:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.931 21:36:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.931 21:36:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.931 21:36:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.931 21:36:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.190 21:36:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.190 21:36:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.190 21:36:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.190 21:36:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.190 21:36:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.190 21:36:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.190 21:36:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.190 21:36:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.190 21:36:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.190 21:36:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.190 21:36:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.450 21:36:38 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.450 21:36:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.450 21:36:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.450 21:36:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.450 21:36:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.450 21:36:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.450 21:36:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.450 21:36:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.450 21:36:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.450 21:36:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.450 21:36:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.450 21:36:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.450 21:36:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.710 21:36:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.089 [2024-09-29 21:36:40.022888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.349 [2024-09-29 21:36:40.252829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.349 [2024-09-29 21:36:40.252849] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.608 [2024-09-29 21:36:40.470112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.608 [2024-09-29 21:36:40.470335] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.985 spdk_app_start Round 2 00:05:22.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:22.985 21:36:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.985 21:36:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:22.985 21:36:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58361 /var/tmp/spdk-nbd.sock 00:05:22.985 21:36:41 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58361 ']' 00:05:22.985 21:36:41 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.985 21:36:41 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.985 21:36:41 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.985 21:36:41 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.985 21:36:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.985 21:36:41 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.985 21:36:41 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:22.985 21:36:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.244 Malloc0 00:05:23.244 21:36:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.503 Malloc1 00:05:23.503 21:36:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.503 21:36:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.762 /dev/nbd0 00:05:23.762 21:36:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.762 21:36:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:23.762 1+0 records in
00:05:23.762 1+0 records out
00:05:23.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442152 s, 9.3 MB/s
00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:23.762 21:36:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:23.762 21:36:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:23.763 21:36:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:23.763 21:36:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:24.022 /dev/nbd1
00:05:24.022 21:36:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:24.022 21:36:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:24.022 1+0 records in
00:05:24.022 1+0 records out
00:05:24.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496705 s, 8.2 MB/s
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:24.022 21:36:42 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:24.022 21:36:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:24.022 21:36:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:24.022 21:36:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:24.022 21:36:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.022 21:36:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:24.281 21:36:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:24.281 {
00:05:24.281 "nbd_device": "/dev/nbd0",
00:05:24.281 "bdev_name": "Malloc0"
00:05:24.281 },
00:05:24.281 {
00:05:24.281 "nbd_device": "/dev/nbd1",
00:05:24.281 "bdev_name": "Malloc1"
00:05:24.281 }
00:05:24.281 ]'
00:05:24.281 21:36:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:24.281 {
00:05:24.281 "nbd_device": "/dev/nbd0",
00:05:24.281 "bdev_name": "Malloc0"
00:05:24.281 },
00:05:24.281 {
00:05:24.281 "nbd_device": "/dev/nbd1",
00:05:24.281 "bdev_name": "Malloc1"
00:05:24.281 }
00:05:24.281 ]'
00:05:24.281 21:36:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:24.281 21:36:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:24.281 /dev/nbd1'
00:05:24.281 21:36:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:24.281 /dev/nbd1'
00:05:24.281 21:36:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:24.281 21:36:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:24.281 21:36:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:24.281 21:36:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:24.282 21:36:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:24.282 21:36:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:24.282 21:36:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.282 21:36:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:24.282 21:36:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:24.282 21:36:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:24.282 21:36:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:24.282 21:36:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:24.282 256+0 records in
00:05:24.282 256+0 records out
00:05:24.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141665 s, 74.0 MB/s
00:05:24.282 21:36:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:24.282 21:36:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:24.282 256+0 records in
00:05:24.282 256+0 records out
00:05:24.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213462 s, 49.1 MB/s
00:05:24.282 21:36:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:24.282 21:36:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:24.541 256+0 records in
00:05:24.541 256+0 records out
00:05:24.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244916 s, 42.8 MB/s
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:24.541 21:36:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:24.542 21:36:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:24.542 21:36:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:24.542 21:36:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:24.542 21:36:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:24.542 21:36:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:24.542 21:36:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:24.542 21:36:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:24.542 21:36:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:24.542 21:36:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:24.542 21:36:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:24.802 21:36:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:24.802 21:36:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:24.802 21:36:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:24.802 21:36:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:24.802 21:36:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:24.802 21:36:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:24.802 21:36:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:24.802 21:36:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:24.802 21:36:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:24.802 21:36:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:24.802 21:36:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:25.062 21:36:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:25.062 21:36:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:25.062 21:36:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:25.062 21:36:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:25.062 21:36:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:25.062 21:36:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:25.062 21:36:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:25.062 21:36:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:25.062 21:36:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:25.062 21:36:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:25.062 21:36:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:25.062 21:36:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:25.062 21:36:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:25.632 21:36:44 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:27.013 [2024-09-29 21:36:45.711501] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:27.013 [2024-09-29 21:36:45.939724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.013 [2024-09-29 21:36:45.939729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:27.273 [2024-09-29 21:36:46.156838] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:27.273 [2024-09-29 21:36:46.157025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:28.654 21:36:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58361 /var/tmp/spdk-nbd.sock
00:05:28.654 21:36:47 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58361 ']'
00:05:28.654 21:36:47 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:28.654 21:36:47 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:28.654 21:36:47 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:28.654 21:36:47 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:28.654 21:36:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:28.654 21:36:47 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:28.654 21:36:47 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:28.654 21:36:47 event.app_repeat -- event/event.sh@39 -- # killprocess 58361
00:05:28.655 21:36:47 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58361 ']'
00:05:28.655 21:36:47 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58361
00:05:28.655 21:36:47 event.app_repeat -- common/autotest_common.sh@955 -- # uname
00:05:28.655 21:36:47 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:28.655 21:36:47 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58361
killing process with pid 58361
00:05:28.655 21:36:47 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:28.655 21:36:47 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:28.655 21:36:47 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58361'
00:05:28.655 21:36:47 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58361
00:05:28.655 21:36:47 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58361
00:05:30.041 spdk_app_start is called in Round 0.
00:05:30.041 Shutdown signal received, stop current app iteration
00:05:30.041 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization...
00:05:30.041 spdk_app_start is called in Round 1.
00:05:30.041 Shutdown signal received, stop current app iteration
00:05:30.041 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization...
00:05:30.041 spdk_app_start is called in Round 2.
00:05:30.041 Shutdown signal received, stop current app iteration
00:05:30.041 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization...
00:05:30.041 spdk_app_start is called in Round 3.
00:05:30.041 Shutdown signal received, stop current app iteration
00:05:30.041 21:36:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:30.041 21:36:48 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:30.041 real 0m19.349s
00:05:30.041 user 0m39.986s
00:05:30.041 sys 0m2.921s
00:05:30.041 21:36:48 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:30.041 ************************************
00:05:30.041 END TEST app_repeat
00:05:30.041 ************************************
00:05:30.041 21:36:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:30.041 21:36:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:30.041 21:36:48 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:30.041 21:36:48 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:30.041 21:36:48 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:30.041 21:36:48 event -- common/autotest_common.sh@10 -- # set +x
00:05:30.041 ************************************
00:05:30.041 START TEST cpu_locks
00:05:30.041 ************************************
00:05:30.041 21:36:48 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:05:30.302 * Looking for test storage...
00:05:30.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:30.302 21:36:49 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:30.302 21:36:49 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version
00:05:30.302 21:36:49 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:30.302 21:36:49 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:30.302 21:36:49 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:30.302 21:36:49 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:30.302 21:36:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:30.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.302 --rc genhtml_branch_coverage=1
00:05:30.302 --rc genhtml_function_coverage=1
00:05:30.302 --rc genhtml_legend=1
00:05:30.302 --rc geninfo_all_blocks=1
00:05:30.302 --rc geninfo_unexecuted_blocks=1
00:05:30.302
00:05:30.302 '
00:05:30.302 21:36:49 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:30.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.302 --rc genhtml_branch_coverage=1
00:05:30.302 --rc genhtml_function_coverage=1
00:05:30.302 --rc genhtml_legend=1
00:05:30.302 --rc geninfo_all_blocks=1
00:05:30.302 --rc geninfo_unexecuted_blocks=1
00:05:30.302
00:05:30.302 '
00:05:30.302 21:36:49 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:30.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.302 --rc genhtml_branch_coverage=1
00:05:30.302 --rc genhtml_function_coverage=1
00:05:30.302 --rc genhtml_legend=1
00:05:30.302 --rc geninfo_all_blocks=1
00:05:30.302 --rc geninfo_unexecuted_blocks=1
00:05:30.302
00:05:30.302 '
00:05:30.302 21:36:49 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:30.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:30.302 --rc genhtml_branch_coverage=1
00:05:30.302 --rc genhtml_function_coverage=1
00:05:30.302 --rc genhtml_legend=1
00:05:30.302 --rc geninfo_all_blocks=1
00:05:30.302 --rc geninfo_unexecuted_blocks=1
00:05:30.302
00:05:30.302 '
00:05:30.302 21:36:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:30.302 21:36:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:30.302 21:36:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:30.302 21:36:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:30.302 21:36:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:30.302 21:36:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:30.302 21:36:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:30.302 ************************************
00:05:30.302 START TEST default_locks
00:05:30.302 ************************************
00:05:30.302 21:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks
00:05:30.302 21:36:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58808
00:05:30.302 21:36:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:30.302 21:36:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58808
00:05:30.302 21:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58808 ']'
00:05:30.302 21:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:30.302 21:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:30.302 21:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:30.302 21:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:30.302 21:36:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:30.303 [2024-09-29 21:36:49.272513] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:30.303 [2024-09-29 21:36:49.272737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58808 ]
00:05:30.563 [2024-09-29 21:36:49.442019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.823 [2024-09-29 21:36:49.690132] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.803 21:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:31.803 21:36:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0
00:05:31.803 21:36:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58808
00:05:31.803 21:36:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58808
00:05:31.803 21:36:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:32.372 21:36:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58808
00:05:32.372 21:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58808 ']'
00:05:32.372 21:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58808
00:05:32.372 21:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname
00:05:32.372 21:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:32.372 21:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58808
00:05:32.372 21:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:32.372 21:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:32.372 21:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58808'
killing process with pid 58808
00:05:32.372 21:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58808
00:05:32.372 21:36:51 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58808
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58808
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58808
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58808
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58808 ']'
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
ERROR: process (pid: 58808) is no longer running
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:34.914 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58808) - No such process
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:34.914 real 0m4.666s
00:05:34.914 user 0m4.395s
00:05:34.914 sys 0m0.893s
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:34.914 21:36:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:34.914 ************************************
00:05:34.914 END TEST default_locks
00:05:34.914 ************************************
00:05:34.914 21:36:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:34.914 21:36:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:34.914 21:36:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:34.914 21:36:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:35.175 ************************************
00:05:35.175 START TEST default_locks_via_rpc
00:05:35.175 ************************************
00:05:35.175 21:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc
00:05:35.175 21:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58889
00:05:35.175 21:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58889
00:05:35.175 21:36:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:35.175 21:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58889 ']'
00:05:35.175 21:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:35.175 21:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:35.175 21:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:35.175 21:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:35.175 21:36:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:35.175 [2024-09-29 21:36:54.004776] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:05:35.175 [2024-09-29 21:36:54.004963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58889 ]
00:05:35.435 [2024-09-29 21:36:54.170173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:35.435 [2024-09-29 21:36:54.419079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58889
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58889
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58889
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58889 ']'
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58889
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58889
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58889'
killing process with pid 58889
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58889
00:05:36.817 21:36:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58889
00:05:40.113 ************************************
00:05:40.113 END TEST default_locks_via_rpc
00:05:40.113 ************************************
00:05:40.113 real 0m4.513s
00:05:40.113 user 0m4.242s
00:05:40.113 sys 0m0.790s
00:05:40.113 21:36:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:40.113 21:36:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:40.113 21:36:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:40.113 21:36:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:40.113 21:36:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:40.113 21:36:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:40.113 ************************************
00:05:40.113 START TEST non_locking_app_on_locked_coremask
00:05:40.113 ************************************
00:05:40.113 21:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:05:40.113 21:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58963
00:05:40.113 21:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:40.113 21:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58963 /var/tmp/spdk.sock
00:05:40.113 21:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58963 ']'
00:05:40.113 21:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:40.113 21:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:40.113 21:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:40.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.113 21:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.113 21:36:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.113 [2024-09-29 21:36:58.590602] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:40.113 [2024-09-29 21:36:58.590754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58963 ] 00:05:40.113 [2024-09-29 21:36:58.755816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.113 [2024-09-29 21:36:58.995962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.050 21:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.050 21:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:41.050 21:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58992 00:05:41.051 21:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:41.051 21:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58992 /var/tmp/spdk2.sock 00:05:41.051 21:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 58992 ']' 00:05:41.051 21:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.051 21:36:59 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.051 21:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.051 21:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.051 21:36:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.310 [2024-09-29 21:37:00.078159] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:41.310 [2024-09-29 21:37:00.078392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58992 ] 00:05:41.310 [2024-09-29 21:37:00.235341] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:41.310 [2024-09-29 21:37:00.235411] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.878 [2024-09-29 21:37:00.737510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.786 21:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.786 21:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:43.786 21:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58963 00:05:43.786 21:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.786 21:37:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58963 00:05:44.354 21:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58963 00:05:44.354 21:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58963 ']' 00:05:44.354 21:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58963 00:05:44.354 21:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:44.354 21:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:44.354 21:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58963 00:05:44.354 21:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:44.354 21:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:44.354 killing process with pid 58963 00:05:44.354 21:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58963' 00:05:44.354 21:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58963 00:05:44.354 21:37:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58963 00:05:49.632 21:37:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58992 00:05:49.632 21:37:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 58992 ']' 00:05:49.632 21:37:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 58992 00:05:49.632 21:37:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:49.632 21:37:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.632 21:37:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58992 00:05:49.632 killing process with pid 58992 00:05:49.632 21:37:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.632 21:37:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.632 21:37:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58992' 00:05:49.632 21:37:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 58992 00:05:49.632 21:37:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 58992 00:05:52.935 00:05:52.935 real 0m12.773s 00:05:52.935 user 0m12.630s 00:05:52.935 sys 0m1.670s 00:05:52.935 21:37:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:05:52.935 ************************************ 00:05:52.935 END TEST non_locking_app_on_locked_coremask 00:05:52.935 21:37:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.935 ************************************ 00:05:52.935 21:37:11 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:52.935 21:37:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.935 21:37:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.935 21:37:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.935 ************************************ 00:05:52.935 START TEST locking_app_on_unlocked_coremask 00:05:52.935 ************************************ 00:05:52.935 21:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:52.935 21:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59152 00:05:52.935 21:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59152 /var/tmp/spdk.sock 00:05:52.935 21:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:52.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:52.935 21:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59152 ']' 00:05:52.935 21:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.935 21:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.935 21:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.935 21:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.935 21:37:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.935 [2024-09-29 21:37:11.435069] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:52.935 [2024-09-29 21:37:11.435313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59152 ] 00:05:52.935 [2024-09-29 21:37:11.593132] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:52.935 [2024-09-29 21:37:11.593201] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.935 [2024-09-29 21:37:11.796671] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.874 21:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:53.874 21:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:53.874 21:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59168 00:05:53.874 21:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59168 /var/tmp/spdk2.sock 00:05:53.874 21:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.874 21:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59168 ']' 00:05:53.874 21:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.874 21:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.874 21:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.874 21:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.874 21:37:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.874 [2024-09-29 21:37:12.727518] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:53.874 [2024-09-29 21:37:12.727762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ] 00:05:54.134 [2024-09-29 21:37:12.885658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.394 [2024-09-29 21:37:13.306150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.304 21:37:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.304 21:37:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:56.304 21:37:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59168 00:05:56.304 21:37:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59168 00:05:56.304 21:37:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.685 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59152 00:05:57.685 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59152 ']' 00:05:57.685 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59152 00:05:57.685 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:57.685 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.685 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59152 00:05:57.685 killing process with pid 59152 00:05:57.685 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.685 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.685 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59152' 00:05:57.685 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59152 00:05:57.685 21:37:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59152 00:06:02.960 21:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59168 00:06:02.960 21:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59168 ']' 00:06:02.961 21:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59168 00:06:02.961 21:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:02.961 21:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.961 21:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59168 00:06:02.961 killing process with pid 59168 00:06:02.961 21:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.961 21:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.961 21:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59168' 00:06:02.961 21:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59168 00:06:02.961 21:37:21 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@974 -- # wait 59168 00:06:05.502 ************************************ 00:06:05.502 END TEST locking_app_on_unlocked_coremask 00:06:05.502 ************************************ 00:06:05.502 00:06:05.502 real 0m12.923s 00:06:05.502 user 0m13.140s 00:06:05.502 sys 0m1.476s 00:06:05.502 21:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.502 21:37:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.502 21:37:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:05.502 21:37:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.502 21:37:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.502 21:37:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.502 ************************************ 00:06:05.502 START TEST locking_app_on_locked_coremask 00:06:05.502 ************************************ 00:06:05.502 21:37:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:05.502 21:37:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59331 00:06:05.502 21:37:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.502 21:37:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59331 /var/tmp/spdk.sock 00:06:05.502 21:37:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59331 ']' 00:06:05.502 21:37:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.502 21:37:24 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.502 21:37:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.502 21:37:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.502 21:37:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.502 [2024-09-29 21:37:24.421655] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:05.502 [2024-09-29 21:37:24.421848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59331 ] 00:06:05.765 [2024-09-29 21:37:24.585223] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.027 [2024-09-29 21:37:24.832798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59353 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59353 /var/tmp/spdk2.sock 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59353 /var/tmp/spdk2.sock 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59353 /var/tmp/spdk2.sock 00:06:06.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59353 ']' 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.965 21:37:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.965 [2024-09-29 21:37:25.913133] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:06.965 [2024-09-29 21:37:25.913333] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59353 ] 00:06:07.225 [2024-09-29 21:37:26.064569] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59331 has claimed it. 00:06:07.225 [2024-09-29 21:37:26.064639] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.793 ERROR: process (pid: 59353) is no longer running 00:06:07.793 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59353) - No such process 00:06:07.793 21:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.793 21:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:07.793 21:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:07.793 21:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.793 21:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.793 21:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.793 21:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59331 00:06:07.793 21:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59331 00:06:07.793 21:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.052 21:37:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59331 00:06:08.052 21:37:26 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59331 ']' 00:06:08.052 21:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59331 00:06:08.052 21:37:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:08.052 21:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.052 21:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59331 00:06:08.312 21:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.312 21:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.312 21:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59331' 00:06:08.312 killing process with pid 59331 00:06:08.312 21:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59331 00:06:08.312 21:37:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59331 00:06:10.850 00:06:10.850 real 0m5.363s 00:06:10.850 user 0m5.334s 00:06:10.850 sys 0m1.008s 00:06:10.850 21:37:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.850 21:37:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.850 ************************************ 00:06:10.850 END TEST locking_app_on_locked_coremask 00:06:10.850 ************************************ 00:06:10.850 21:37:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.850 21:37:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:06:10.850 21:37:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.850 21:37:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.850 ************************************ 00:06:10.850 START TEST locking_overlapped_coremask 00:06:10.850 ************************************ 00:06:10.850 21:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:10.850 21:37:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59428 00:06:10.850 21:37:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.850 21:37:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59428 /var/tmp/spdk.sock 00:06:10.850 21:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59428 ']' 00:06:10.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.850 21:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.850 21:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.850 21:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.850 21:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.850 21:37:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.109 [2024-09-29 21:37:29.868055] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:11.109 [2024-09-29 21:37:29.868200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59428 ] 00:06:11.109 [2024-09-29 21:37:30.037693] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.369 [2024-09-29 21:37:30.284957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.369 [2024-09-29 21:37:30.285120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.369 [2024-09-29 21:37:30.285173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59446 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59446 /var/tmp/spdk2.sock 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59446 /var/tmp/spdk2.sock 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59446 /var/tmp/spdk2.sock 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59446 ']' 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.309 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.569 [2024-09-29 21:37:31.381290] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:12.569 [2024-09-29 21:37:31.381516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59446 ] 00:06:12.569 [2024-09-29 21:37:31.543143] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59428 has claimed it. 00:06:12.569 [2024-09-29 21:37:31.543386] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:13.139 ERROR: process (pid: 59446) is no longer running 00:06:13.139 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59446) - No such process 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59428 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59428 ']' 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59428 00:06:13.139 21:37:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.139 21:37:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59428 00:06:13.139 21:37:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.139 21:37:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.139 21:37:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59428' 00:06:13.139 killing process with pid 59428 00:06:13.139 21:37:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59428 00:06:13.139 21:37:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59428 00:06:16.433 00:06:16.433 real 0m5.024s 00:06:16.433 user 0m12.976s 00:06:16.433 sys 0m0.811s 00:06:16.433 21:37:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.433 21:37:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.433 ************************************ 00:06:16.433 END TEST locking_overlapped_coremask 00:06:16.433 ************************************ 00:06:16.433 21:37:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:16.433 21:37:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.433 21:37:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.433 21:37:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.433 ************************************ 00:06:16.433 START TEST 
locking_overlapped_coremask_via_rpc 00:06:16.433 ************************************ 00:06:16.433 21:37:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:16.433 21:37:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59515 00:06:16.433 21:37:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:16.433 21:37:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59515 /var/tmp/spdk.sock 00:06:16.433 21:37:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59515 ']' 00:06:16.433 21:37:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.433 21:37:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.433 21:37:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.433 21:37:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.433 21:37:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.433 [2024-09-29 21:37:34.965814] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:16.433 [2024-09-29 21:37:34.966453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59515 ] 00:06:16.433 [2024-09-29 21:37:35.133424] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:16.433 [2024-09-29 21:37:35.133606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.433 [2024-09-29 21:37:35.377322] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.433 [2024-09-29 21:37:35.377460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.433 [2024-09-29 21:37:35.377510] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.814 21:37:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.814 21:37:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:17.814 21:37:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59539 00:06:17.814 21:37:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:17.814 21:37:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59539 /var/tmp/spdk2.sock 00:06:17.814 21:37:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59539 ']' 00:06:17.814 21:37:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:17.814 21:37:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.814 21:37:36 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:17.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:17.814 21:37:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.814 21:37:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.814 [2024-09-29 21:37:36.489453] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:17.814 [2024-09-29 21:37:36.489686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59539 ] 00:06:17.814 [2024-09-29 21:37:36.649592] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:17.814 [2024-09-29 21:37:36.649850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.383 [2024-09-29 21:37:37.177080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:18.383 [2024-09-29 21:37:37.179244] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.383 [2024-09-29 21:37:37.179290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.294 21:37:39 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.294 [2024-09-29 21:37:39.162209] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59515 has claimed it. 00:06:20.294 request: 00:06:20.294 { 00:06:20.294 "method": "framework_enable_cpumask_locks", 00:06:20.294 "req_id": 1 00:06:20.294 } 00:06:20.294 Got JSON-RPC error response 00:06:20.294 response: 00:06:20.294 { 00:06:20.294 "code": -32603, 00:06:20.294 "message": "Failed to claim CPU core: 2" 00:06:20.294 } 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59515 /var/tmp/spdk.sock 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 59515 ']' 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.294 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.554 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.554 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:20.554 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59539 /var/tmp/spdk2.sock 00:06:20.554 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59539 ']' 00:06:20.554 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.554 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.554 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:20.554 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.554 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.814 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.814 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:20.814 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:20.814 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.814 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.814 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.814 00:06:20.814 real 0m4.723s 00:06:20.814 user 0m1.239s 00:06:20.814 sys 0m0.212s 00:06:20.814 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.814 21:37:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.814 ************************************ 00:06:20.814 END TEST locking_overlapped_coremask_via_rpc 00:06:20.814 ************************************ 00:06:20.814 21:37:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:20.814 21:37:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59515 ]] 00:06:20.814 21:37:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59515 00:06:20.814 21:37:39 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59515 ']' 00:06:20.814 21:37:39 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59515 00:06:20.814 21:37:39 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:20.814 21:37:39 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.814 21:37:39 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59515 00:06:20.814 21:37:39 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.814 killing process with pid 59515 00:06:20.814 21:37:39 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.814 21:37:39 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59515' 00:06:20.814 21:37:39 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59515 00:06:20.814 21:37:39 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59515 00:06:24.110 21:37:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59539 ]] 00:06:24.110 21:37:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59539 00:06:24.110 21:37:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59539 ']' 00:06:24.110 21:37:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59539 00:06:24.110 21:37:42 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:24.110 21:37:42 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.110 21:37:42 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59539 00:06:24.110 killing process with pid 59539 00:06:24.110 21:37:42 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:24.110 21:37:42 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:24.110 21:37:42 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59539' 00:06:24.110 21:37:42 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59539 00:06:24.110 21:37:42 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59539 00:06:26.689 21:37:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.689 Process with pid 59515 is not found 00:06:26.689 21:37:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:26.689 21:37:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59515 ]] 00:06:26.689 21:37:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59515 00:06:26.689 21:37:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59515 ']' 00:06:26.689 21:37:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59515 00:06:26.689 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59515) - No such process 00:06:26.689 21:37:45 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59515 is not found' 00:06:26.689 21:37:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59539 ]] 00:06:26.689 21:37:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59539 00:06:26.689 21:37:45 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59539 ']' 00:06:26.689 21:37:45 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59539 00:06:26.689 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59539) - No such process 00:06:26.689 21:37:45 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59539 is not found' 00:06:26.689 Process with pid 59539 is not found 00:06:26.689 21:37:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:26.689 00:06:26.689 real 0m56.331s 00:06:26.689 user 1m32.369s 00:06:26.689 sys 0m8.519s 00:06:26.689 21:37:45 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.689 21:37:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.689 
************************************ 00:06:26.689 END TEST cpu_locks 00:06:26.689 ************************************ 00:06:26.689 00:06:26.689 real 1m28.810s 00:06:26.689 user 2m32.728s 00:06:26.689 sys 0m12.855s 00:06:26.689 21:37:45 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.689 21:37:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.689 ************************************ 00:06:26.689 END TEST event 00:06:26.689 ************************************ 00:06:26.689 21:37:45 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:26.689 21:37:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.689 21:37:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.689 21:37:45 -- common/autotest_common.sh@10 -- # set +x 00:06:26.689 ************************************ 00:06:26.689 START TEST thread 00:06:26.689 ************************************ 00:06:26.689 21:37:45 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:26.689 * Looking for test storage... 
00:06:26.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:26.689 21:37:45 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:26.689 21:37:45 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:26.689 21:37:45 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:26.689 21:37:45 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:26.689 21:37:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.689 21:37:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.689 21:37:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.689 21:37:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.689 21:37:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.689 21:37:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.689 21:37:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.689 21:37:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.689 21:37:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.689 21:37:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.689 21:37:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.689 21:37:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:26.689 21:37:45 thread -- scripts/common.sh@345 -- # : 1 00:06:26.689 21:37:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.689 21:37:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.689 21:37:45 thread -- scripts/common.sh@365 -- # decimal 1 00:06:26.689 21:37:45 thread -- scripts/common.sh@353 -- # local d=1 00:06:26.689 21:37:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.689 21:37:45 thread -- scripts/common.sh@355 -- # echo 1 00:06:26.689 21:37:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.689 21:37:45 thread -- scripts/common.sh@366 -- # decimal 2 00:06:26.689 21:37:45 thread -- scripts/common.sh@353 -- # local d=2 00:06:26.689 21:37:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.689 21:37:45 thread -- scripts/common.sh@355 -- # echo 2 00:06:26.689 21:37:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.690 21:37:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.690 21:37:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.690 21:37:45 thread -- scripts/common.sh@368 -- # return 0 00:06:26.690 21:37:45 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.690 21:37:45 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:26.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.690 --rc genhtml_branch_coverage=1 00:06:26.690 --rc genhtml_function_coverage=1 00:06:26.690 --rc genhtml_legend=1 00:06:26.690 --rc geninfo_all_blocks=1 00:06:26.690 --rc geninfo_unexecuted_blocks=1 00:06:26.690 00:06:26.690 ' 00:06:26.690 21:37:45 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:26.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.690 --rc genhtml_branch_coverage=1 00:06:26.690 --rc genhtml_function_coverage=1 00:06:26.690 --rc genhtml_legend=1 00:06:26.690 --rc geninfo_all_blocks=1 00:06:26.690 --rc geninfo_unexecuted_blocks=1 00:06:26.690 00:06:26.690 ' 00:06:26.690 21:37:45 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:26.690 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.690 --rc genhtml_branch_coverage=1 00:06:26.690 --rc genhtml_function_coverage=1 00:06:26.690 --rc genhtml_legend=1 00:06:26.690 --rc geninfo_all_blocks=1 00:06:26.690 --rc geninfo_unexecuted_blocks=1 00:06:26.690 00:06:26.690 ' 00:06:26.690 21:37:45 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:26.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.690 --rc genhtml_branch_coverage=1 00:06:26.690 --rc genhtml_function_coverage=1 00:06:26.690 --rc genhtml_legend=1 00:06:26.690 --rc geninfo_all_blocks=1 00:06:26.690 --rc geninfo_unexecuted_blocks=1 00:06:26.690 00:06:26.690 ' 00:06:26.690 21:37:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.690 21:37:45 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:26.690 21:37:45 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.690 21:37:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.690 ************************************ 00:06:26.690 START TEST thread_poller_perf 00:06:26.690 ************************************ 00:06:26.690 21:37:45 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:26.690 [2024-09-29 21:37:45.665256] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:26.690 [2024-09-29 21:37:45.665910] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59745 ] 00:06:26.950 [2024-09-29 21:37:45.833299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.209 [2024-09-29 21:37:46.066540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.209 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:28.587 ====================================== 00:06:28.587 busy:2299277702 (cyc) 00:06:28.587 total_run_count: 427000 00:06:28.587 tsc_hz: 2290000000 (cyc) 00:06:28.587 ====================================== 00:06:28.587 poller_cost: 5384 (cyc), 2351 (nsec) 00:06:28.587 00:06:28.587 real 0m1.858s 00:06:28.587 user 0m1.614s 00:06:28.587 sys 0m0.136s 00:06:28.587 21:37:47 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.587 21:37:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:28.587 ************************************ 00:06:28.587 END TEST thread_poller_perf 00:06:28.588 ************************************ 00:06:28.588 21:37:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:28.588 21:37:47 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:28.588 21:37:47 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.588 21:37:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.588 ************************************ 00:06:28.588 START TEST thread_poller_perf 00:06:28.588 ************************************ 00:06:28.588 21:37:47 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 
1000 -l 0 -t 1 00:06:28.846 [2024-09-29 21:37:47.592149] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:28.846 [2024-09-29 21:37:47.592317] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59790 ] 00:06:28.846 [2024-09-29 21:37:47.758800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.104 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:29.104 [2024-09-29 21:37:48.000222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.484 ====================================== 00:06:30.484 busy:2293480684 (cyc) 00:06:30.484 total_run_count: 5619000 00:06:30.484 tsc_hz: 2290000000 (cyc) 00:06:30.484 ====================================== 00:06:30.484 poller_cost: 408 (cyc), 178 (nsec) 00:06:30.484 00:06:30.484 real 0m1.863s 00:06:30.484 user 0m1.615s 00:06:30.484 sys 0m0.141s 00:06:30.484 ************************************ 00:06:30.484 END TEST thread_poller_perf 00:06:30.484 ************************************ 00:06:30.484 21:37:49 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.484 21:37:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.484 21:37:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:30.484 ************************************ 00:06:30.484 END TEST thread 00:06:30.484 ************************************ 00:06:30.484 00:06:30.484 real 0m4.084s 00:06:30.484 user 0m3.410s 00:06:30.484 sys 0m0.475s 00:06:30.484 21:37:49 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.484 21:37:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.744 21:37:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:30.744 21:37:49 -- spdk/autotest.sh@176 -- # run_test 
app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:30.744 21:37:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:30.744 21:37:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.744 21:37:49 -- common/autotest_common.sh@10 -- # set +x 00:06:30.744 ************************************ 00:06:30.744 START TEST app_cmdline 00:06:30.744 ************************************ 00:06:30.744 21:37:49 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:30.744 * Looking for test storage... 00:06:30.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:30.744 21:37:49 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:30.744 21:37:49 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:30.744 21:37:49 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:31.004 21:37:49 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:31.004 21:37:49 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.004 21:37:49 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.004 21:37:49 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:31.005 21:37:49 app_cmdline -- 
scripts/common.sh@345 -- # : 1 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.005 21:37:49 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:31.005 21:37:49 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.005 21:37:49 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:31.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.005 --rc genhtml_branch_coverage=1 00:06:31.005 --rc genhtml_function_coverage=1 00:06:31.005 --rc genhtml_legend=1 00:06:31.005 --rc geninfo_all_blocks=1 00:06:31.005 --rc geninfo_unexecuted_blocks=1 00:06:31.005 00:06:31.005 ' 00:06:31.005 21:37:49 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:31.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.005 --rc genhtml_branch_coverage=1 00:06:31.005 --rc 
genhtml_function_coverage=1 00:06:31.005 --rc genhtml_legend=1 00:06:31.005 --rc geninfo_all_blocks=1 00:06:31.005 --rc geninfo_unexecuted_blocks=1 00:06:31.005 00:06:31.005 ' 00:06:31.005 21:37:49 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:31.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.005 --rc genhtml_branch_coverage=1 00:06:31.005 --rc genhtml_function_coverage=1 00:06:31.005 --rc genhtml_legend=1 00:06:31.005 --rc geninfo_all_blocks=1 00:06:31.005 --rc geninfo_unexecuted_blocks=1 00:06:31.005 00:06:31.005 ' 00:06:31.005 21:37:49 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:31.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.005 --rc genhtml_branch_coverage=1 00:06:31.005 --rc genhtml_function_coverage=1 00:06:31.005 --rc genhtml_legend=1 00:06:31.005 --rc geninfo_all_blocks=1 00:06:31.005 --rc geninfo_unexecuted_blocks=1 00:06:31.005 00:06:31.005 ' 00:06:31.005 21:37:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:31.005 21:37:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59879 00:06:31.005 21:37:49 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:31.005 21:37:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59879 00:06:31.005 21:37:49 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59879 ']' 00:06:31.005 21:37:49 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.005 21:37:49 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.005 21:37:49 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:31.005 21:37:49 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.005 21:37:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.005 [2024-09-29 21:37:49.862082] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:31.005 [2024-09-29 21:37:49.862202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59879 ] 00:06:31.264 [2024-09-29 21:37:50.037860] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.524 [2024-09-29 21:37:50.278315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.462 21:37:51 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.462 21:37:51 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:32.462 21:37:51 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:32.462 { 00:06:32.462 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:06:32.462 "fields": { 00:06:32.462 "major": 25, 00:06:32.462 "minor": 1, 00:06:32.462 "patch": 0, 00:06:32.462 "suffix": "-pre", 00:06:32.462 "commit": "09cc66129" 00:06:32.462 } 00:06:32.462 } 00:06:32.462 21:37:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:32.462 21:37:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:32.462 21:37:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:32.462 21:37:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:32.462 21:37:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:32.462 21:37:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:32.462 21:37:51 app_cmdline -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:32.462 21:37:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:32.462 21:37:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:32.462 21:37:51 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.722 21:37:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:32.722 21:37:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:32.722 21:37:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.722 request: 00:06:32.722 { 00:06:32.722 "method": "env_dpdk_get_mem_stats", 
00:06:32.722 "req_id": 1 00:06:32.722 } 00:06:32.722 Got JSON-RPC error response 00:06:32.722 response: 00:06:32.722 { 00:06:32.722 "code": -32601, 00:06:32.722 "message": "Method not found" 00:06:32.722 } 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.722 21:37:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59879 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59879 ']' 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59879 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59879 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.722 killing process with pid 59879 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59879' 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@969 -- # kill 59879 00:06:32.722 21:37:51 app_cmdline -- common/autotest_common.sh@974 -- # wait 59879 00:06:36.017 00:06:36.017 real 0m4.809s 00:06:36.017 user 0m4.744s 00:06:36.017 sys 0m0.799s 00:06:36.017 ************************************ 00:06:36.017 END TEST app_cmdline 00:06:36.017 ************************************ 00:06:36.017 21:37:54 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.017 21:37:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 21:37:54 -- 
spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:36.017 21:37:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.017 21:37:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.017 21:37:54 -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 ************************************ 00:06:36.017 START TEST version 00:06:36.017 ************************************ 00:06:36.018 21:37:54 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:36.018 * Looking for test storage... 00:06:36.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:36.018 21:37:54 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:36.018 21:37:54 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:36.018 21:37:54 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:36.018 21:37:54 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:36.018 21:37:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.018 21:37:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.018 21:37:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.018 21:37:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.018 21:37:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.018 21:37:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.018 21:37:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.018 21:37:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.018 21:37:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.018 21:37:54 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.018 21:37:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.018 21:37:54 version -- scripts/common.sh@344 -- # case "$op" in 00:06:36.018 21:37:54 version -- scripts/common.sh@345 -- # : 1 00:06:36.018 21:37:54 version -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.018 21:37:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.018 21:37:54 version -- scripts/common.sh@365 -- # decimal 1 00:06:36.018 21:37:54 version -- scripts/common.sh@353 -- # local d=1 00:06:36.018 21:37:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.018 21:37:54 version -- scripts/common.sh@355 -- # echo 1 00:06:36.018 21:37:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.018 21:37:54 version -- scripts/common.sh@366 -- # decimal 2 00:06:36.018 21:37:54 version -- scripts/common.sh@353 -- # local d=2 00:06:36.018 21:37:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.018 21:37:54 version -- scripts/common.sh@355 -- # echo 2 00:06:36.018 21:37:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.018 21:37:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.018 21:37:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.018 21:37:54 version -- scripts/common.sh@368 -- # return 0 00:06:36.018 21:37:54 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.018 21:37:54 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:36.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.018 --rc genhtml_branch_coverage=1 00:06:36.018 --rc genhtml_function_coverage=1 00:06:36.018 --rc genhtml_legend=1 00:06:36.018 --rc geninfo_all_blocks=1 00:06:36.018 --rc geninfo_unexecuted_blocks=1 00:06:36.018 00:06:36.018 ' 00:06:36.018 21:37:54 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:36.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.018 --rc genhtml_branch_coverage=1 00:06:36.018 --rc genhtml_function_coverage=1 00:06:36.018 --rc genhtml_legend=1 00:06:36.018 --rc geninfo_all_blocks=1 00:06:36.018 --rc geninfo_unexecuted_blocks=1 
00:06:36.018 00:06:36.018 ' 00:06:36.018 21:37:54 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:36.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.018 --rc genhtml_branch_coverage=1 00:06:36.018 --rc genhtml_function_coverage=1 00:06:36.018 --rc genhtml_legend=1 00:06:36.018 --rc geninfo_all_blocks=1 00:06:36.018 --rc geninfo_unexecuted_blocks=1 00:06:36.018 00:06:36.018 ' 00:06:36.018 21:37:54 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:36.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.018 --rc genhtml_branch_coverage=1 00:06:36.018 --rc genhtml_function_coverage=1 00:06:36.018 --rc genhtml_legend=1 00:06:36.018 --rc geninfo_all_blocks=1 00:06:36.018 --rc geninfo_unexecuted_blocks=1 00:06:36.018 00:06:36.018 ' 00:06:36.018 21:37:54 version -- app/version.sh@17 -- # get_header_version major 00:06:36.018 21:37:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.018 21:37:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.018 21:37:54 version -- app/version.sh@14 -- # cut -f2 00:06:36.018 21:37:54 version -- app/version.sh@17 -- # major=25 00:06:36.018 21:37:54 version -- app/version.sh@18 -- # get_header_version minor 00:06:36.018 21:37:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.018 21:37:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.018 21:37:54 version -- app/version.sh@14 -- # cut -f2 00:06:36.018 21:37:54 version -- app/version.sh@18 -- # minor=1 00:06:36.018 21:37:54 version -- app/version.sh@19 -- # get_header_version patch 00:06:36.018 21:37:54 version -- app/version.sh@14 -- # cut -f2 00:06:36.018 21:37:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.018 
21:37:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.018 21:37:54 version -- app/version.sh@19 -- # patch=0 00:06:36.018 21:37:54 version -- app/version.sh@20 -- # get_header_version suffix 00:06:36.018 21:37:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:36.018 21:37:54 version -- app/version.sh@14 -- # cut -f2 00:06:36.018 21:37:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.018 21:37:54 version -- app/version.sh@20 -- # suffix=-pre 00:06:36.018 21:37:54 version -- app/version.sh@22 -- # version=25.1 00:06:36.018 21:37:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:36.018 21:37:54 version -- app/version.sh@28 -- # version=25.1rc0 00:06:36.018 21:37:54 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:36.018 21:37:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:36.018 21:37:54 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:36.018 21:37:54 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:36.018 00:06:36.018 real 0m0.327s 00:06:36.018 user 0m0.195s 00:06:36.018 sys 0m0.189s 00:06:36.018 ************************************ 00:06:36.018 END TEST version 00:06:36.018 ************************************ 00:06:36.018 21:37:54 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.018 21:37:54 version -- common/autotest_common.sh@10 -- # set +x 00:06:36.018 21:37:54 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:36.018 21:37:54 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:36.018 21:37:54 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:36.018 21:37:54 -- common/autotest_common.sh@1101 
-- # '[' 2 -le 1 ']' 00:06:36.018 21:37:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.018 21:37:54 -- common/autotest_common.sh@10 -- # set +x 00:06:36.018 ************************************ 00:06:36.018 START TEST bdev_raid 00:06:36.018 ************************************ 00:06:36.018 21:37:54 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:36.018 * Looking for test storage... 00:06:36.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:36.018 21:37:54 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:36.018 21:37:54 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:06:36.018 21:37:54 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:36.018 21:37:54 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:36.019 21:37:54 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.279 21:37:55 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:36.279 21:37:55 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.279 21:37:55 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:36.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.279 --rc genhtml_branch_coverage=1 00:06:36.279 --rc genhtml_function_coverage=1 00:06:36.279 --rc genhtml_legend=1 00:06:36.279 --rc geninfo_all_blocks=1 00:06:36.279 --rc geninfo_unexecuted_blocks=1 00:06:36.279 00:06:36.279 ' 00:06:36.279 21:37:55 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:36.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.279 --rc genhtml_branch_coverage=1 00:06:36.279 --rc genhtml_function_coverage=1 00:06:36.279 --rc genhtml_legend=1 00:06:36.279 --rc geninfo_all_blocks=1 00:06:36.279 --rc geninfo_unexecuted_blocks=1 00:06:36.279 00:06:36.279 ' 00:06:36.279 21:37:55 bdev_raid -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:36.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.279 --rc genhtml_branch_coverage=1 00:06:36.279 --rc genhtml_function_coverage=1 00:06:36.279 --rc genhtml_legend=1 00:06:36.279 --rc geninfo_all_blocks=1 00:06:36.279 --rc geninfo_unexecuted_blocks=1 00:06:36.279 00:06:36.279 ' 00:06:36.279 21:37:55 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:36.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.279 --rc genhtml_branch_coverage=1 00:06:36.279 --rc genhtml_function_coverage=1 00:06:36.279 --rc genhtml_legend=1 00:06:36.279 --rc geninfo_all_blocks=1 00:06:36.279 --rc geninfo_unexecuted_blocks=1 00:06:36.279 00:06:36.279 ' 00:06:36.279 21:37:55 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:36.279 21:37:55 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:36.279 21:37:55 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:36.279 21:37:55 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:36.279 21:37:55 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:36.279 21:37:55 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:36.279 21:37:55 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:36.279 21:37:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.279 21:37:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.279 21:37:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:36.279 ************************************ 00:06:36.279 START TEST raid1_resize_data_offset_test 00:06:36.279 ************************************ 00:06:36.279 21:37:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:36.279 21:37:55 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@917 -- # raid_pid=60072 00:06:36.279 21:37:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60072' 00:06:36.279 Process raid pid: 60072 00:06:36.279 21:37:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60072 00:06:36.279 21:37:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:36.279 21:37:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 60072 ']' 00:06:36.279 21:37:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.279 21:37:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.279 21:37:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.279 21:37:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.279 21:37:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.279 [2024-09-29 21:37:55.134217] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:36.279 [2024-09-29 21:37:55.134328] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.539 [2024-09-29 21:37:55.300069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.798 [2024-09-29 21:37:55.547202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.798 [2024-09-29 21:37:55.778848] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.798 [2024-09-29 21:37:55.778882] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.057 21:37:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.057 21:37:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:37.057 21:37:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:37.057 21:37:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.057 21:37:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.317 malloc0 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.317 malloc1 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.317 21:37:56 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.317 null0 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.317 [2024-09-29 21:37:56.165471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:37.317 [2024-09-29 21:37:56.167416] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:37.317 [2024-09-29 21:37:56.167462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:37.317 [2024-09-29 21:37:56.167604] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:37.317 [2024-09-29 21:37:56.167615] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:37.317 [2024-09-29 21:37:56.167881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:37.317 [2024-09-29 21:37:56.168051] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:37.317 [2024-09-29 21:37:56.168066] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:37.317 [2024-09-29 21:37:56.168245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.317 [2024-09-29 21:37:56.225307] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.317 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.887 malloc2 00:06:37.887 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.887 21:37:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:37.887 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.887 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.887 [2024-09-29 21:37:56.841608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:37.887 [2024-09-29 21:37:56.856103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:37.887 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.887 [2024-09-29 21:37:56.858162] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:37.887 21:37:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.887 21:37:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:37.887 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.887 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.147 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.147 21:37:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:38.147 21:37:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60072 00:06:38.147 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 60072 ']' 00:06:38.147 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 60072 00:06:38.147 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:06:38.147 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:06:38.147 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60072 00:06:38.147 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.147 killing process with pid 60072 00:06:38.147 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.147 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60072' 00:06:38.147 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 60072 00:06:38.147 21:37:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 60072 00:06:38.147 [2024-09-29 21:37:56.952258] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:38.147 [2024-09-29 21:37:56.953753] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:38.147 [2024-09-29 21:37:56.953817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.147 [2024-09-29 21:37:56.953834] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:38.147 [2024-09-29 21:37:56.981357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.147 [2024-09-29 21:37:56.981688] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:38.147 [2024-09-29 21:37:56.981708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:40.056 [2024-09-29 21:37:58.858333] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:41.448 21:38:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:41.448 00:06:41.448 real 0m5.161s 00:06:41.448 user 0m4.846s 00:06:41.448 sys 0m0.742s 00:06:41.448 21:38:00 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.448 21:38:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.448 ************************************ 00:06:41.448 END TEST raid1_resize_data_offset_test 00:06:41.448 ************************************ 00:06:41.448 21:38:00 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:41.448 21:38:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:41.448 21:38:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.448 21:38:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:41.448 ************************************ 00:06:41.448 START TEST raid0_resize_superblock_test 00:06:41.449 ************************************ 00:06:41.449 21:38:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:06:41.449 21:38:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:41.449 Process raid pid: 60156 00:06:41.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:41.449 21:38:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60156 00:06:41.449 21:38:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:41.449 21:38:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60156' 00:06:41.449 21:38:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60156 00:06:41.449 21:38:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60156 ']' 00:06:41.449 21:38:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.449 21:38:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.449 21:38:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.449 21:38:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.449 21:38:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.449 [2024-09-29 21:38:00.365744] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:41.449 [2024-09-29 21:38:00.365874] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.708 [2024-09-29 21:38:00.530863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.968 [2024-09-29 21:38:00.785694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.228 [2024-09-29 21:38:01.021625] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.228 [2024-09-29 21:38:01.021662] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.228 21:38:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.228 21:38:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:42.228 21:38:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:42.228 21:38:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.228 21:38:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.167 malloc0 00:06:43.167 21:38:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.167 21:38:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:43.167 21:38:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.167 21:38:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.167 [2024-09-29 21:38:01.817860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:43.167 [2024-09-29 21:38:01.817956] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.167 [2024-09-29 21:38:01.817982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:43.167 [2024-09-29 21:38:01.817994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.167 [2024-09-29 21:38:01.820414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.167 [2024-09-29 21:38:01.820454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:43.167 pt0 00:06:43.167 21:38:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.167 21:38:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:43.167 21:38:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.167 21:38:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.167 df40b90c-1ad7-4a49-bfd3-9c2c5a32883b 00:06:43.167 21:38:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.167 21:38:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:43.167 21:38:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.167 21:38:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.167 a3ea509c-17a0-4bdc-a21d-5a5d8604eadb 00:06:43.167 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.167 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:43.167 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.167 21:38:02 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.167 63d3dc42-7d1f-40fd-9f60-e8ed8fa253ee 00:06:43.167 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.167 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:43.167 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:43.167 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.168 [2024-09-29 21:38:02.027653] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev a3ea509c-17a0-4bdc-a21d-5a5d8604eadb is claimed 00:06:43.168 [2024-09-29 21:38:02.027752] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 63d3dc42-7d1f-40fd-9f60-e8ed8fa253ee is claimed 00:06:43.168 [2024-09-29 21:38:02.027894] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:43.168 [2024-09-29 21:38:02.027915] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:43.168 [2024-09-29 21:38:02.028196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:43.168 [2024-09-29 21:38:02.028396] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:43.168 [2024-09-29 21:38:02.028408] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:43.168 [2024-09-29 21:38:02.028557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.168 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.168 [2024-09-29 
21:38:02.143597] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.428 [2024-09-29 21:38:02.187483] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:43.428 [2024-09-29 21:38:02.187507] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a3ea509c-17a0-4bdc-a21d-5a5d8604eadb' was resized: old size 131072, new size 204800 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.428 [2024-09-29 21:38:02.199415] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:43.428 [2024-09-29 21:38:02.199437] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '63d3dc42-7d1f-40fd-9f60-e8ed8fa253ee' was resized: old size 131072, new size 204800 00:06:43.428 
[2024-09-29 21:38:02.199463] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.428 21:38:02 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.428 [2024-09-29 21:38:02.295348] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.428 [2024-09-29 21:38:02.339078] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:43.428 [2024-09-29 21:38:02.339179] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:43.428 [2024-09-29 21:38:02.339209] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:43.428 [2024-09-29 21:38:02.339244] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:43.428 [2024-09-29 21:38:02.339356] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.428 [2024-09-29 21:38:02.339419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.428 
[2024-09-29 21:38:02.339464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.428 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.428 [2024-09-29 21:38:02.351019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:43.428 [2024-09-29 21:38:02.351085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.428 [2024-09-29 21:38:02.351105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:43.428 [2024-09-29 21:38:02.351116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.428 [2024-09-29 21:38:02.353457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.428 [2024-09-29 21:38:02.353530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:43.428 [2024-09-29 21:38:02.355142] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a3ea509c-17a0-4bdc-a21d-5a5d8604eadb 00:06:43.428 [2024-09-29 21:38:02.355208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev a3ea509c-17a0-4bdc-a21d-5a5d8604eadb is claimed 00:06:43.428 [2024-09-29 21:38:02.355326] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 63d3dc42-7d1f-40fd-9f60-e8ed8fa253ee 00:06:43.428 [2024-09-29 21:38:02.355344] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 63d3dc42-7d1f-40fd-9f60-e8ed8fa253ee is claimed 00:06:43.428 [2024-09-29 21:38:02.355477] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 63d3dc42-7d1f-40fd-9f60-e8ed8fa253ee (2) smaller than existing raid bdev Raid (3) 00:06:43.428 [2024-09-29 21:38:02.355500] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev a3ea509c-17a0-4bdc-a21d-5a5d8604eadb: File exists 00:06:43.428 [2024-09-29 21:38:02.355535] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:43.428 [2024-09-29 21:38:02.355548] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:43.428 [2024-09-29 21:38:02.355794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:43.428 [2024-09-29 21:38:02.355935] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:43.428 [2024-09-29 21:38:02.355942] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:43.428 pt0 00:06:43.429 [2024-09-29 21:38:02.356100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.429 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.429 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:43.429 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.429 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.429 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.429 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.429 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.429 21:38:02 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.429 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.429 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.429 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:43.429 [2024-09-29 21:38:02.375403] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.429 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60156 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60156 ']' 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60156 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60156 00:06:43.689 killing process with pid 60156 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 60156' 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60156 00:06:43.689 [2024-09-29 21:38:02.468137] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.689 [2024-09-29 21:38:02.468188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.689 [2024-09-29 21:38:02.468227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.689 [2024-09-29 21:38:02.468236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:43.689 21:38:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60156 00:06:45.069 [2024-09-29 21:38:03.996530] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:46.451 21:38:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:46.451 00:06:46.451 real 0m5.073s 00:06:46.451 user 0m5.055s 00:06:46.451 sys 0m0.786s 00:06:46.451 ************************************ 00:06:46.451 END TEST raid0_resize_superblock_test 00:06:46.451 ************************************ 00:06:46.451 21:38:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.451 21:38:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.451 21:38:05 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:46.451 21:38:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:46.451 21:38:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.451 21:38:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:46.451 ************************************ 00:06:46.451 START TEST raid1_resize_superblock_test 00:06:46.451 
************************************ 00:06:46.451 21:38:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:06:46.451 21:38:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:46.451 21:38:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60260 00:06:46.451 21:38:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:46.451 21:38:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60260' 00:06:46.451 Process raid pid: 60260 00:06:46.451 21:38:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60260 00:06:46.451 21:38:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60260 ']' 00:06:46.451 21:38:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.451 21:38:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.451 21:38:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.451 21:38:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.451 21:38:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.711 [2024-09-29 21:38:05.517132] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:46.711 [2024-09-29 21:38:05.517338] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.711 [2024-09-29 21:38:05.685945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.971 [2024-09-29 21:38:05.937163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.231 [2024-09-29 21:38:06.173140] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.231 [2024-09-29 21:38:06.173272] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.491 21:38:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.491 21:38:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:47.491 21:38:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:47.491 21:38:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.491 21:38:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.061 malloc0 00:06:48.061 21:38:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.061 21:38:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:48.061 21:38:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.061 21:38:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.061 [2024-09-29 21:38:06.955517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:48.061 [2024-09-29 21:38:06.955608] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.061 [2024-09-29 21:38:06.955633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:48.061 [2024-09-29 21:38:06.955645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.061 [2024-09-29 21:38:06.957979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.061 [2024-09-29 21:38:06.958022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:48.061 pt0 00:06:48.061 21:38:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.061 21:38:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:48.061 21:38:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.061 21:38:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.321 f3057b68-9238-488b-83ce-4b65f6639255 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.321 63831051-48bf-4776-9dc7-b987f4c703a7 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.321 21:38:07 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.321 aace6234-4f59-4b58-a151-257b7eb407e3 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.321 [2024-09-29 21:38:07.166200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 63831051-48bf-4776-9dc7-b987f4c703a7 is claimed 00:06:48.321 [2024-09-29 21:38:07.166310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev aace6234-4f59-4b58-a151-257b7eb407e3 is claimed 00:06:48.321 [2024-09-29 21:38:07.166438] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:48.321 [2024-09-29 21:38:07.166455] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:48.321 [2024-09-29 21:38:07.166705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:48.321 [2024-09-29 21:38:07.166889] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:48.321 [2024-09-29 21:38:07.166900] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:48.321 [2024-09-29 21:38:07.167072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.321 [2024-09-29 
21:38:07.278172] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:48.321 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.581 [2024-09-29 21:38:07.326013] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:48.581 [2024-09-29 21:38:07.326040] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '63831051-48bf-4776-9dc7-b987f4c703a7' was resized: old size 131072, new size 204800 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.581 [2024-09-29 21:38:07.337965] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:48.581 [2024-09-29 21:38:07.337988] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'aace6234-4f59-4b58-a151-257b7eb407e3' was resized: old size 131072, new size 204800 00:06:48.581 
[2024-09-29 21:38:07.338015] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.581 21:38:07 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:48.581 [2024-09-29 21:38:07.453861] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.581 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.581 [2024-09-29 21:38:07.505562] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:48.581 [2024-09-29 21:38:07.505624] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:48.581 [2024-09-29 21:38:07.505660] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:48.581 [2024-09-29 21:38:07.505781] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:48.581 [2024-09-29 21:38:07.505930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:48.582 [2024-09-29 21:38:07.505991] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:48.582 
[2024-09-29 21:38:07.506007] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.582 [2024-09-29 21:38:07.517506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:48.582 [2024-09-29 21:38:07.517575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:48.582 [2024-09-29 21:38:07.517595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:48.582 [2024-09-29 21:38:07.517616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:48.582 [2024-09-29 21:38:07.519903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:48.582 [2024-09-29 21:38:07.519940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:48.582 [2024-09-29 21:38:07.521534] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 63831051-48bf-4776-9dc7-b987f4c703a7 00:06:48.582 [2024-09-29 21:38:07.521592] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 63831051-48bf-4776-9dc7-b987f4c703a7 is claimed 00:06:48.582 [2024-09-29 21:38:07.521694] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev aace6234-4f59-4b58-a151-257b7eb407e3 00:06:48.582 [2024-09-29 21:38:07.521713] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev aace6234-4f59-4b58-a151-257b7eb407e3 is claimed 00:06:48.582 [2024-09-29 21:38:07.521859] 
bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev aace6234-4f59-4b58-a151-257b7eb407e3 (2) smaller than existing raid bdev Raid (3) 00:06:48.582 [2024-09-29 21:38:07.521881] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 63831051-48bf-4776-9dc7-b987f4c703a7: File exists 00:06:48.582 [2024-09-29 21:38:07.521916] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:48.582 [2024-09-29 21:38:07.521944] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:48.582 [2024-09-29 21:38:07.522198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:48.582 pt0 00:06:48.582 [2024-09-29 21:38:07.522352] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:48.582 [2024-09-29 21:38:07.522361] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:48.582 [2024-09-29 21:38:07.522499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case 
$raid_level in 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.582 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.582 [2024-09-29 21:38:07.545789] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60260 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60260 ']' 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60260 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60260 00:06:48.842 killing process with pid 60260 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 60260' 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60260 00:06:48.842 [2024-09-29 21:38:07.624358] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:48.842 [2024-09-29 21:38:07.624418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:48.842 [2024-09-29 21:38:07.624460] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:48.842 [2024-09-29 21:38:07.624468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:48.842 21:38:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60260 00:06:50.229 [2024-09-29 21:38:09.141285] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:51.612 21:38:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:51.612 00:06:51.612 real 0m5.052s 00:06:51.612 user 0m5.043s 00:06:51.612 sys 0m0.795s 00:06:51.612 21:38:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.612 21:38:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.612 ************************************ 00:06:51.612 END TEST raid1_resize_superblock_test 00:06:51.612 ************************************ 00:06:51.612 21:38:10 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:51.612 21:38:10 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:51.612 21:38:10 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:51.612 21:38:10 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:51.612 21:38:10 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:51.612 21:38:10 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:51.612 
21:38:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:51.612 21:38:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.612 21:38:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:51.612 ************************************ 00:06:51.612 START TEST raid_function_test_raid0 00:06:51.612 ************************************ 00:06:51.612 21:38:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:06:51.612 21:38:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:51.612 21:38:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:51.612 21:38:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:51.612 21:38:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60368 00:06:51.612 21:38:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:51.612 21:38:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60368' 00:06:51.612 Process raid pid: 60368 00:06:51.613 21:38:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60368 00:06:51.613 21:38:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60368 ']' 00:06:51.613 21:38:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.613 21:38:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.613 21:38:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
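Stepping back to the raid1_resize_superblock_test that completed above: the block counts it asserts are internally consistent. Each 64 MiB lvol is 131072 512-byte blocks while the raid exposes 122880; after resizing both lvols to 100 MiB (204800 blocks) the raid exposes 196608. The constant 8192-block (4 MiB) gap is presumably the per-base-bdev superblock/metadata reservation; that interpretation is inferred from the numbers in the log, not from SPDK source. A quick arithmetic check:

```shell
# Verify the block counts logged above are mutually consistent, assuming a
# fixed 8192-block reservation per base bdev (inferred, not from SPDK source).
blocklen=512
meta_blocks=8192

old_lvol=$(( 64 * 1024 * 1024 / blocklen ))    # 131072 blocks per 64 MiB lvol
new_lvol=$(( 100 * 1024 * 1024 / blocklen ))   # 204800 blocks per 100 MiB lvol

echo "raid1 blockcnt before resize: $(( old_lvol - meta_blocks ))"  # 122880
echo "raid1 blockcnt after resize:  $(( new_lvol - meta_blocks ))"  # 196608
```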
00:06:51.613 21:38:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.613 21:38:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:51.872 [2024-09-29 21:38:10.664295] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:51.872 [2024-09-29 21:38:10.664517] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.872 [2024-09-29 21:38:10.833823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.132 [2024-09-29 21:38:11.086467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.392 [2024-09-29 21:38:11.326470] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.392 [2024-09-29 21:38:11.326609] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:52.652 Base_1 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.652 
21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:52.652 Base_2 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:52.652 [2024-09-29 21:38:11.599702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:52.652 [2024-09-29 21:38:11.601717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:52.652 [2024-09-29 21:38:11.601788] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:52.652 [2024-09-29 21:38:11.601801] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:52.652 [2024-09-29 21:38:11.602063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:52.652 [2024-09-29 21:38:11.602208] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:52.652 [2024-09-29 21:38:11.602217] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:52.652 [2024-09-29 21:38:11.602378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:52.652 21:38:11 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:52.652 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:52.911 [2024-09-29 21:38:11.839256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:52.911 /dev/nbd0 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:52.911 1+0 records in 00:06:52.911 1+0 records out 00:06:52.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405771 s, 10.1 MB/s 00:06:52.911 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.170 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:06:53.170 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:53.170 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:53.170 21:38:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:06:53.170 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.170 21:38:11 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:53.170 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:53.170 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:53.170 21:38:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:53.170 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:53.170 { 00:06:53.170 "nbd_device": "/dev/nbd0", 00:06:53.170 "bdev_name": "raid" 00:06:53.170 } 00:06:53.170 ]' 00:06:53.170 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.170 { 00:06:53.170 "nbd_device": "/dev/nbd0", 00:06:53.170 "bdev_name": "raid" 00:06:53.170 } 00:06:53.170 ]' 00:06:53.170 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:53.430 4096+0 records in 00:06:53.430 4096+0 records out 00:06:53.430 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0337887 s, 62.1 MB/s 00:06:53.430 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:53.691 4096+0 records in 00:06:53.691 4096+0 records out 00:06:53.691 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
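The tail of the log (cut off above mid-transfer) is the raid0 data-verify phase. Reassembled from the xtrace, the raid_function_test_raid0 flow is roughly the following; every command and size appears in the log itself, but this is a sketch against a running bdev_svc target with root access for NBD, not a standalone script:

```shell
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# Two 32 MiB malloc bdevs striped into a raid0 with a 64 KiB strip size.
# The blockcnt 131072 logged above is exactly 2 * 32 MiB / 512 B, i.e.
# raid0 exposes the full capacity of both bases (no superblock here).
$RPC bdev_malloc_create 32 512 -b Base_1
$RPC bdev_malloc_create 32 512 -b Base_2
$RPC bdev_raid_create -z 64 -r raid0 -b 'Base_1 Base_2' -n raid

# Export the raid over NBD and verify data integrity end to end:
# write a 2 MiB random pattern through the device, flush, and compare.
$RPC nbd_start_disk raid /dev/nbd0
dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
blockdev --flushbufs /dev/nbd0
cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0

# The unmap test then zeroes a range in the pattern file and discards the
# same range on the device (first range: offset 0, 128 blocks = 64 KiB),
# re-comparing after each step.
dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
blkdiscard -o 0 -l 65536 /dev/nbd0
blockdev --flushbufs /dev/nbd0
cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
```
This is a setup/verification fragment: it requires a live SPDK target, the nbd kernel module, and the `/raidtest` scratch directory, so it is shown to make the logged flow legible rather than to be executed as-is.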
0.226766 s, 9.2 MB/s 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:53.691 128+0 records in 00:06:53.691 128+0 records out 00:06:53.691 65536 bytes (66 kB, 64 KiB) copied, 0.00113834 s, 57.6 MB/s 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:53.691 2035+0 records in 00:06:53.691 2035+0 records out 00:06:53.691 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0151918 s, 68.6 MB/s 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:53.691 456+0 records in 00:06:53.691 456+0 records out 00:06:53.691 233472 bytes (233 kB, 228 KiB) copied, 0.00398609 s, 58.6 MB/s 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 
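The unmap loop traced above converts each block offset/count pair into the byte values passed to `blkdiscard -o`/`-l`. A side-effect-free sketch of just that arithmetic (the real loop in `bdev_raid.sh` also runs `dd`, `blkdiscard`, and `cmp` against `/dev/nbd0`, omitted here):

```shell
blksize=512
unmap_blk_offs=(0 1028 321)    # block offsets from the trace
unmap_blk_nums=(128 2035 456)  # block counts from the trace

for ((i = 0; i < 3; i++)); do
  unmap_off=$((unmap_blk_offs[i] * blksize))  # byte offset for blkdiscard -o
  unmap_len=$((unmap_blk_nums[i] * blksize))  # byte length for blkdiscard -l
  echo "$unmap_off $unmap_len"
done
# Prints:
#   0 65536
#   526336 1041920
#   164352 233472
```

These are exactly the `unmap_off`/`unmap_len` values recorded in the trace, which is a quick way to sanity-check a log like this one.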
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.691 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:53.951 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.951 [2024-09-29 21:38:12.784531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:53.951 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.951 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.951 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.951 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.951 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.951 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:53.951 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.951 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:53.951 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:53.951 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:06:54.211 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.211 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.211 21:38:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60368 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60368 ']' 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60368 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60368 00:06:54.211 killing process with pid 60368 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60368' 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60368 00:06:54.211 [2024-09-29 21:38:13.079660] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.211 [2024-09-29 21:38:13.079773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.211 [2024-09-29 21:38:13.079824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:54.211 [2024-09-29 21:38:13.079837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:54.211 21:38:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60368 00:06:54.471 [2024-09-29 21:38:13.294667] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:55.851 21:38:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:55.851 00:06:55.851 real 0m4.053s 00:06:55.851 user 0m4.407s 00:06:55.851 sys 0m1.168s 00:06:55.851 21:38:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.851 21:38:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:55.851 ************************************ 00:06:55.851 END TEST raid_function_test_raid0 00:06:55.851 ************************************ 00:06:55.851 21:38:14 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:55.851 21:38:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:55.851 21:38:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.851 21:38:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.851 
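The `killprocess` sequence traced above guards the kill: it verifies the pid is still alive with `kill -0`, reads the command name with `ps -o comm=`, and refuses to proceed if that name is `sudo` (the rationale — never signal a sudo wrapper directly — is an assumption from the trace, not documented here). A hypothetical standalone version, demonstrated against the current shell's own pid so it is safe to run:

```shell
pid=$$   # stand-in for the SPDK app pid (60368 in the trace)

if kill -0 "$pid" 2>/dev/null; then
  # comm= suppresses the header, yielding just the process name.
  process_name=$(ps --no-headers -o comm= "$pid")
  if [ "$process_name" != "sudo" ]; then
    echo "would kill $pid ($process_name)"
  fi
fi
```

The real helper then sends the signal and `wait`s on the pid, as the `kill 60368` / `wait 60368` lines in the trace show.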
************************************ 00:06:55.851 START TEST raid_function_test_concat 00:06:55.851 ************************************ 00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60497 00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60497' 00:06:55.851 Process raid pid: 60497 00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60497 00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60497 ']' 00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.851 21:38:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:55.851 [2024-09-29 21:38:14.788861] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:55.851 [2024-09-29 21:38:14.789142] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.110 [2024-09-29 21:38:14.955967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.370 [2024-09-29 21:38:15.196988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.629 [2024-09-29 21:38:15.429128] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.630 [2024-09-29 21:38:15.429233] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.630 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.630 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:06:56.630 21:38:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:56.630 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.630 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:56.889 Base_1 00:06:56.889 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.889 21:38:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:56.890 Base_2 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:56.890 [2024-09-29 21:38:15.738755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:56.890 [2024-09-29 21:38:15.740734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:56.890 [2024-09-29 21:38:15.740803] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:56.890 [2024-09-29 21:38:15.740815] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:56.890 [2024-09-29 21:38:15.741069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:56.890 [2024-09-29 21:38:15.741204] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:56.890 [2024-09-29 21:38:15.741212] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:56.890 [2024-09-29 21:38:15.741361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.890 21:38:15 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:56.890 21:38:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:57.149 [2024-09-29 21:38:15.974374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:57.149 /dev/nbd0 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.149 1+0 records in 00:06:57.149 1+0 records out 00:06:57.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261641 s, 15.7 MB/s 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:06:57.149 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.150 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.150 21:38:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:06:57.150 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.150 
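The `waitfornbd` trace above polls `/proc/partitions` with `grep -q -w` until the nbd device name appears, retrying up to 20 times. A generalized sketch of that polling pattern, exercised against a temporary file rather than `/proc/partitions` (the function name and sleep interval are illustrative, not from the SPDK helper):

```shell
# Retry up to 20 times until a whole-word match for $2 appears in file $1.
wait_for_word() {
  local file=$1 word=$2 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$word" "$file" && return 0
    sleep 0.05
  done
  return 1
}

tmp=$(mktemp)
echo "259 0 1048576 nbd0" > "$tmp"   # a /proc/partitions-style line
wait_for_word "$tmp" nbd0 && echo "nbd0 present"
rm -f "$tmp"
```

The `-w` flag matters: it keeps `nbd0` from matching `nbd01`, which is why the real helper uses it against the partitions table.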
21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:57.150 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:57.150 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.150 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:57.409 { 00:06:57.409 "nbd_device": "/dev/nbd0", 00:06:57.409 "bdev_name": "raid" 00:06:57.409 } 00:06:57.409 ]' 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:57.409 { 00:06:57.409 "nbd_device": "/dev/nbd0", 00:06:57.409 "bdev_name": "raid" 00:06:57.409 } 00:06:57.409 ]' 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:57.409 
21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:57.409 4096+0 records in 00:06:57.409 4096+0 records out 00:06:57.409 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.020798 s, 101 MB/s 00:06:57.409 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:57.669 4096+0 records in 00:06:57.669 4096+0 
records out 00:06:57.669 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.237096 s, 8.8 MB/s 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:57.669 128+0 records in 00:06:57.669 128+0 records out 00:06:57.669 65536 bytes (66 kB, 64 KiB) copied, 0.000605575 s, 108 MB/s 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:06:57.669 2035+0 records in 00:06:57.669 2035+0 records out 00:06:57.669 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0137502 s, 75.8 MB/s 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:57.669 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:57.929 456+0 records in 00:06:57.929 456+0 records out 00:06:57.929 233472 bytes (233 kB, 228 KiB) copied, 0.00382327 s, 61.1 MB/s 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:57.929 [2024-09-29 21:38:16.884466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:57.929 21:38:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:57.929 21:38:16 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60497 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60497 ']' 00:06:58.189 21:38:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60497 00:06:58.448 21:38:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:06:58.448 21:38:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.448 21:38:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60497 00:06:58.448 killing process with pid 60497 00:06:58.448 21:38:17 
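The `nbd_get_count` trace above extracts device paths from the RPC's JSON with `jq`, then counts them with `grep -c`. One subtlety the trace exposes: with an empty list, `grep -c` prints `0` but exits non-zero, which is why a bare `true` appears in the xtrace — the helper evidently falls back to it so `set -e` does not abort. A sketch of the empty-list case (assumes `jq` is installed):

```shell
# What nbd_get_disks returned once all disks were stopped.
nbd_disks_json='[]'

# Pull out the .nbd_device field of each entry (empty here).
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# grep -c prints the match count on stdout even when it exits 1 (no
# matches), so the count is captured correctly; "|| true" absorbs the
# non-zero exit status.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"   # 0
```

With one disk attached (as earlier in the trace, where the JSON held `/dev/nbd0`), the same pipeline yields `count=1`, matching the `'[' 1 -ne 1 ']'` check recorded there.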
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.448 21:38:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.448 21:38:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60497' 00:06:58.448 21:38:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60497 00:06:58.448 [2024-09-29 21:38:17.211497] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.448 21:38:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60497 00:06:58.448 [2024-09-29 21:38:17.211627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.448 [2024-09-29 21:38:17.211683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.448 [2024-09-29 21:38:17.211696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:58.448 [2024-09-29 21:38:17.425473] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:59.843 21:38:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:59.843 00:06:59.843 real 0m4.065s 00:06:59.843 user 0m4.483s 00:06:59.843 sys 0m1.077s 00:06:59.843 21:38:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.843 ************************************ 00:06:59.843 END TEST raid_function_test_concat 00:06:59.843 ************************************ 00:06:59.843 21:38:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:59.843 21:38:18 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:59.843 21:38:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:59.843 21:38:18 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.843 21:38:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.104 ************************************ 00:07:00.104 START TEST raid0_resize_test 00:07:00.104 ************************************ 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60625 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60625' 00:07:00.104 Process raid pid: 60625 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60625 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60625 ']' 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.104 21:38:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.104 [2024-09-29 21:38:18.921184] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:00.104 [2024-09-29 21:38:18.921377] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.364 [2024-09-29 21:38:19.091649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.364 [2024-09-29 21:38:19.331585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.624 [2024-09-29 21:38:19.562837] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.624 [2024-09-29 21:38:19.562871] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.885 Base_1 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.885 
21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.885 Base_2 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.885 [2024-09-29 21:38:19.776293] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:00.885 [2024-09-29 21:38:19.778258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:00.885 [2024-09-29 21:38:19.778313] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:00.885 [2024-09-29 21:38:19.778324] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:00.885 [2024-09-29 21:38:19.778535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:00.885 [2024-09-29 21:38:19.778658] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:00.885 [2024-09-29 21:38:19.778670] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:00.885 [2024-09-29 21:38:19.778792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.885 
21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.885 [2024-09-29 21:38:19.788210] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:00.885 [2024-09-29 21:38:19.788308] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:00.885 true 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.885 [2024-09-29 21:38:19.804347] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.885 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.886 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:00.886 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:00.886 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:00.886 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:00.886 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:00.886 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:00.886 21:38:19 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.886 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.886 [2024-09-29 21:38:19.852148] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:00.886 [2024-09-29 21:38:19.852207] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:00.886 [2024-09-29 21:38:19.852259] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:00.886 true 00:07:00.886 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.886 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:00.886 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:00.886 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.886 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.145 [2024-09-29 21:38:19.868272] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60625 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@950 -- # '[' -z 60625 ']' 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60625 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60625 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60625' 00:07:01.145 killing process with pid 60625 00:07:01.145 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60625 00:07:01.145 [2024-09-29 21:38:19.939904] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.145 [2024-09-29 21:38:19.940046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.145 [2024-09-29 21:38:19.940120] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.146 21:38:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60625 00:07:01.146 [2024-09-29 21:38:19.940166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:01.146 [2024-09-29 21:38:19.957540] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.528 21:38:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:02.528 00:07:02.528 real 0m2.452s 00:07:02.528 user 0m2.473s 00:07:02.528 sys 0m0.440s 00:07:02.528 21:38:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.528 
21:38:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.528 ************************************ 00:07:02.528 END TEST raid0_resize_test 00:07:02.528 ************************************ 00:07:02.528 21:38:21 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:02.528 21:38:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:02.528 21:38:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.528 21:38:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.528 ************************************ 00:07:02.528 START TEST raid1_resize_test 00:07:02.528 ************************************ 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60687 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60687' 
00:07:02.528 Process raid pid: 60687 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60687 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60687 ']' 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.528 21:38:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.528 [2024-09-29 21:38:21.447162] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:02.528 [2024-09-29 21:38:21.447281] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.789 [2024-09-29 21:38:21.615606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.049 [2024-09-29 21:38:21.862849] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.309 [2024-09-29 21:38:22.099967] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.309 [2024-09-29 21:38:22.100008] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.309 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.309 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:03.309 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:03.309 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.309 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.569 Base_1 00:07:03.569 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.569 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:03.569 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.569 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.569 Base_2 00:07:03.569 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.569 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:03.569 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:03.569 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.569 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.570 [2024-09-29 21:38:22.313573] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:03.570 [2024-09-29 21:38:22.315626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:03.570 [2024-09-29 21:38:22.315732] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:03.570 [2024-09-29 21:38:22.315749] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:03.570 [2024-09-29 21:38:22.316006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:03.570 [2024-09-29 21:38:22.316168] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:03.570 [2024-09-29 21:38:22.316187] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:03.570 [2024-09-29 21:38:22.316331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.570 [2024-09-29 21:38:22.325499] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:03.570 [2024-09-29 21:38:22.325530] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:03.570 true 00:07:03.570 
21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.570 [2024-09-29 21:38:22.341612] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.570 [2024-09-29 21:38:22.385387] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:03.570 [2024-09-29 21:38:22.385449] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:03.570 [2024-09-29 21:38:22.385503] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:03.570 true 00:07:03.570 21:38:22 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.570 [2024-09-29 21:38:22.401500] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60687 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60687 ']' 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60687 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60687 00:07:03.570 killing process with pid 60687 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.570 21:38:22 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60687' 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60687 00:07:03.570 [2024-09-29 21:38:22.473007] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.570 [2024-09-29 21:38:22.473089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.570 21:38:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60687 00:07:03.570 [2024-09-29 21:38:22.473557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.570 [2024-09-29 21:38:22.473579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:03.570 [2024-09-29 21:38:22.490471] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.953 21:38:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:04.953 00:07:04.953 real 0m2.454s 00:07:04.953 user 0m2.496s 00:07:04.953 sys 0m0.432s 00:07:04.953 21:38:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.953 21:38:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.953 ************************************ 00:07:04.953 END TEST raid1_resize_test 00:07:04.953 ************************************ 00:07:04.953 21:38:23 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:04.953 21:38:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:04.953 21:38:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:04.953 21:38:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:04.953 21:38:23 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.953 21:38:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.953 ************************************ 00:07:04.953 START TEST raid_state_function_test 00:07:04.953 ************************************ 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60748 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60748' 00:07:04.953 Process raid pid: 60748 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60748 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60748 ']' 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.953 21:38:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.213 [2024-09-29 21:38:23.978375] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:05.213 [2024-09-29 21:38:23.978512] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.213 [2024-09-29 21:38:24.142432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.473 [2024-09-29 21:38:24.386900] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.733 [2024-09-29 21:38:24.623766] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.733 [2024-09-29 21:38:24.623803] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.993 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.993 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:05.993 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:05.993 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.993 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.993 [2024-09-29 
21:38:24.814106] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:05.993 [2024-09-29 21:38:24.814165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:05.993 [2024-09-29 21:38:24.814175] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:05.993 [2024-09-29 21:38:24.814184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:05.993 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.993 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:05.993 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.993 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:05.993 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.993 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.994 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.994 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.994 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.994 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.994 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.994 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.994 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:05.994 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.994 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.994 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.994 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.994 "name": "Existed_Raid", 00:07:05.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.994 "strip_size_kb": 64, 00:07:05.994 "state": "configuring", 00:07:05.994 "raid_level": "raid0", 00:07:05.994 "superblock": false, 00:07:05.994 "num_base_bdevs": 2, 00:07:05.994 "num_base_bdevs_discovered": 0, 00:07:05.994 "num_base_bdevs_operational": 2, 00:07:05.994 "base_bdevs_list": [ 00:07:05.994 { 00:07:05.994 "name": "BaseBdev1", 00:07:05.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.994 "is_configured": false, 00:07:05.994 "data_offset": 0, 00:07:05.994 "data_size": 0 00:07:05.994 }, 00:07:05.994 { 00:07:05.994 "name": "BaseBdev2", 00:07:05.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.994 "is_configured": false, 00:07:05.994 "data_offset": 0, 00:07:05.994 "data_size": 0 00:07:05.994 } 00:07:05.994 ] 00:07:05.994 }' 00:07:05.994 21:38:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.994 21:38:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.564 [2024-09-29 21:38:25.273219] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:06.564 [2024-09-29 
21:38:25.273323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.564 [2024-09-29 21:38:25.285213] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:06.564 [2024-09-29 21:38:25.285309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:06.564 [2024-09-29 21:38:25.285336] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:06.564 [2024-09-29 21:38:25.285362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.564 [2024-09-29 21:38:25.349004] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:06.564 BaseBdev1 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:06.564 21:38:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.564 [ 00:07:06.564 { 00:07:06.564 "name": "BaseBdev1", 00:07:06.564 "aliases": [ 00:07:06.564 "6873c647-f692-463e-a494-4b580e1218fd" 00:07:06.564 ], 00:07:06.564 "product_name": "Malloc disk", 00:07:06.564 "block_size": 512, 00:07:06.564 "num_blocks": 65536, 00:07:06.564 "uuid": "6873c647-f692-463e-a494-4b580e1218fd", 00:07:06.564 "assigned_rate_limits": { 00:07:06.564 "rw_ios_per_sec": 0, 00:07:06.564 "rw_mbytes_per_sec": 0, 00:07:06.564 "r_mbytes_per_sec": 0, 00:07:06.564 "w_mbytes_per_sec": 0 00:07:06.564 }, 00:07:06.564 "claimed": true, 00:07:06.564 "claim_type": "exclusive_write", 00:07:06.564 "zoned": false, 00:07:06.564 "supported_io_types": { 
00:07:06.564 "read": true, 00:07:06.564 "write": true, 00:07:06.564 "unmap": true, 00:07:06.564 "flush": true, 00:07:06.564 "reset": true, 00:07:06.564 "nvme_admin": false, 00:07:06.564 "nvme_io": false, 00:07:06.564 "nvme_io_md": false, 00:07:06.564 "write_zeroes": true, 00:07:06.564 "zcopy": true, 00:07:06.564 "get_zone_info": false, 00:07:06.564 "zone_management": false, 00:07:06.564 "zone_append": false, 00:07:06.564 "compare": false, 00:07:06.564 "compare_and_write": false, 00:07:06.564 "abort": true, 00:07:06.564 "seek_hole": false, 00:07:06.564 "seek_data": false, 00:07:06.564 "copy": true, 00:07:06.564 "nvme_iov_md": false 00:07:06.564 }, 00:07:06.564 "memory_domains": [ 00:07:06.564 { 00:07:06.564 "dma_device_id": "system", 00:07:06.564 "dma_device_type": 1 00:07:06.564 }, 00:07:06.564 { 00:07:06.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.564 "dma_device_type": 2 00:07:06.564 } 00:07:06.564 ], 00:07:06.564 "driver_specific": {} 00:07:06.564 } 00:07:06.564 ] 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.564 "name": "Existed_Raid", 00:07:06.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.564 "strip_size_kb": 64, 00:07:06.564 "state": "configuring", 00:07:06.564 "raid_level": "raid0", 00:07:06.564 "superblock": false, 00:07:06.564 "num_base_bdevs": 2, 00:07:06.564 "num_base_bdevs_discovered": 1, 00:07:06.564 "num_base_bdevs_operational": 2, 00:07:06.564 "base_bdevs_list": [ 00:07:06.564 { 00:07:06.564 "name": "BaseBdev1", 00:07:06.564 "uuid": "6873c647-f692-463e-a494-4b580e1218fd", 00:07:06.564 "is_configured": true, 00:07:06.564 "data_offset": 0, 00:07:06.564 "data_size": 65536 00:07:06.564 }, 00:07:06.564 { 00:07:06.564 "name": "BaseBdev2", 00:07:06.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.564 "is_configured": false, 00:07:06.564 "data_offset": 0, 00:07:06.564 "data_size": 0 00:07:06.564 } 00:07:06.564 ] 00:07:06.564 }' 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.564 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.135 [2024-09-29 21:38:25.828323] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:07.135 [2024-09-29 21:38:25.828394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.135 [2024-09-29 21:38:25.840356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:07.135 [2024-09-29 21:38:25.842377] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:07.135 [2024-09-29 21:38:25.842418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:07.135 21:38:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.135 "name": "Existed_Raid", 00:07:07.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.135 "strip_size_kb": 64, 00:07:07.135 "state": "configuring", 00:07:07.135 
"raid_level": "raid0", 00:07:07.135 "superblock": false, 00:07:07.135 "num_base_bdevs": 2, 00:07:07.135 "num_base_bdevs_discovered": 1, 00:07:07.135 "num_base_bdevs_operational": 2, 00:07:07.135 "base_bdevs_list": [ 00:07:07.135 { 00:07:07.135 "name": "BaseBdev1", 00:07:07.135 "uuid": "6873c647-f692-463e-a494-4b580e1218fd", 00:07:07.135 "is_configured": true, 00:07:07.135 "data_offset": 0, 00:07:07.135 "data_size": 65536 00:07:07.135 }, 00:07:07.135 { 00:07:07.135 "name": "BaseBdev2", 00:07:07.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.135 "is_configured": false, 00:07:07.135 "data_offset": 0, 00:07:07.135 "data_size": 0 00:07:07.135 } 00:07:07.135 ] 00:07:07.135 }' 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.135 21:38:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.395 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.396 [2024-09-29 21:38:26.323012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:07.396 [2024-09-29 21:38:26.323174] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:07.396 [2024-09-29 21:38:26.323204] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:07.396 [2024-09-29 21:38:26.323546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:07.396 [2024-09-29 21:38:26.323769] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:07.396 [2024-09-29 21:38:26.323822] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:07:07.396 [2024-09-29 21:38:26.324179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.396 BaseBdev2 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.396 [ 00:07:07.396 { 00:07:07.396 "name": "BaseBdev2", 00:07:07.396 "aliases": [ 00:07:07.396 "0f339fc8-4015-420b-9145-5b3e2296cffb" 00:07:07.396 ], 00:07:07.396 "product_name": "Malloc disk", 00:07:07.396 "block_size": 512, 00:07:07.396 
"num_blocks": 65536, 00:07:07.396 "uuid": "0f339fc8-4015-420b-9145-5b3e2296cffb", 00:07:07.396 "assigned_rate_limits": { 00:07:07.396 "rw_ios_per_sec": 0, 00:07:07.396 "rw_mbytes_per_sec": 0, 00:07:07.396 "r_mbytes_per_sec": 0, 00:07:07.396 "w_mbytes_per_sec": 0 00:07:07.396 }, 00:07:07.396 "claimed": true, 00:07:07.396 "claim_type": "exclusive_write", 00:07:07.396 "zoned": false, 00:07:07.396 "supported_io_types": { 00:07:07.396 "read": true, 00:07:07.396 "write": true, 00:07:07.396 "unmap": true, 00:07:07.396 "flush": true, 00:07:07.396 "reset": true, 00:07:07.396 "nvme_admin": false, 00:07:07.396 "nvme_io": false, 00:07:07.396 "nvme_io_md": false, 00:07:07.396 "write_zeroes": true, 00:07:07.396 "zcopy": true, 00:07:07.396 "get_zone_info": false, 00:07:07.396 "zone_management": false, 00:07:07.396 "zone_append": false, 00:07:07.396 "compare": false, 00:07:07.396 "compare_and_write": false, 00:07:07.396 "abort": true, 00:07:07.396 "seek_hole": false, 00:07:07.396 "seek_data": false, 00:07:07.396 "copy": true, 00:07:07.396 "nvme_iov_md": false 00:07:07.396 }, 00:07:07.396 "memory_domains": [ 00:07:07.396 { 00:07:07.396 "dma_device_id": "system", 00:07:07.396 "dma_device_type": 1 00:07:07.396 }, 00:07:07.396 { 00:07:07.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.396 "dma_device_type": 2 00:07:07.396 } 00:07:07.396 ], 00:07:07.396 "driver_specific": {} 00:07:07.396 } 00:07:07.396 ] 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:07.396 21:38:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.396 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.656 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.656 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.656 "name": "Existed_Raid", 00:07:07.656 "uuid": "d889b68f-8930-45bc-b0fd-b3b06f6ea03f", 00:07:07.656 "strip_size_kb": 64, 00:07:07.656 "state": "online", 00:07:07.656 "raid_level": "raid0", 00:07:07.656 "superblock": false, 00:07:07.656 "num_base_bdevs": 2, 00:07:07.656 "num_base_bdevs_discovered": 2, 00:07:07.656 
"num_base_bdevs_operational": 2, 00:07:07.656 "base_bdevs_list": [ 00:07:07.656 { 00:07:07.656 "name": "BaseBdev1", 00:07:07.656 "uuid": "6873c647-f692-463e-a494-4b580e1218fd", 00:07:07.656 "is_configured": true, 00:07:07.656 "data_offset": 0, 00:07:07.656 "data_size": 65536 00:07:07.656 }, 00:07:07.656 { 00:07:07.656 "name": "BaseBdev2", 00:07:07.656 "uuid": "0f339fc8-4015-420b-9145-5b3e2296cffb", 00:07:07.656 "is_configured": true, 00:07:07.656 "data_offset": 0, 00:07:07.656 "data_size": 65536 00:07:07.656 } 00:07:07.656 ] 00:07:07.656 }' 00:07:07.656 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.656 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.916 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:07.916 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:07.916 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:07.916 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:07.916 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:07.916 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:07.916 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:07.916 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:07.916 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.916 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.916 [2024-09-29 21:38:26.778575] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:07.916 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.916 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:07.916 "name": "Existed_Raid", 00:07:07.916 "aliases": [ 00:07:07.916 "d889b68f-8930-45bc-b0fd-b3b06f6ea03f" 00:07:07.916 ], 00:07:07.916 "product_name": "Raid Volume", 00:07:07.916 "block_size": 512, 00:07:07.916 "num_blocks": 131072, 00:07:07.916 "uuid": "d889b68f-8930-45bc-b0fd-b3b06f6ea03f", 00:07:07.916 "assigned_rate_limits": { 00:07:07.916 "rw_ios_per_sec": 0, 00:07:07.916 "rw_mbytes_per_sec": 0, 00:07:07.916 "r_mbytes_per_sec": 0, 00:07:07.916 "w_mbytes_per_sec": 0 00:07:07.917 }, 00:07:07.917 "claimed": false, 00:07:07.917 "zoned": false, 00:07:07.917 "supported_io_types": { 00:07:07.917 "read": true, 00:07:07.917 "write": true, 00:07:07.917 "unmap": true, 00:07:07.917 "flush": true, 00:07:07.917 "reset": true, 00:07:07.917 "nvme_admin": false, 00:07:07.917 "nvme_io": false, 00:07:07.917 "nvme_io_md": false, 00:07:07.917 "write_zeroes": true, 00:07:07.917 "zcopy": false, 00:07:07.917 "get_zone_info": false, 00:07:07.917 "zone_management": false, 00:07:07.917 "zone_append": false, 00:07:07.917 "compare": false, 00:07:07.917 "compare_and_write": false, 00:07:07.917 "abort": false, 00:07:07.917 "seek_hole": false, 00:07:07.917 "seek_data": false, 00:07:07.917 "copy": false, 00:07:07.917 "nvme_iov_md": false 00:07:07.917 }, 00:07:07.917 "memory_domains": [ 00:07:07.917 { 00:07:07.917 "dma_device_id": "system", 00:07:07.917 "dma_device_type": 1 00:07:07.917 }, 00:07:07.917 { 00:07:07.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.917 "dma_device_type": 2 00:07:07.917 }, 00:07:07.917 { 00:07:07.917 "dma_device_id": "system", 00:07:07.917 "dma_device_type": 1 00:07:07.917 }, 00:07:07.917 { 00:07:07.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.917 "dma_device_type": 2 00:07:07.917 } 00:07:07.917 ], 00:07:07.917 "driver_specific": { 
00:07:07.917 "raid": { 00:07:07.917 "uuid": "d889b68f-8930-45bc-b0fd-b3b06f6ea03f", 00:07:07.917 "strip_size_kb": 64, 00:07:07.917 "state": "online", 00:07:07.917 "raid_level": "raid0", 00:07:07.917 "superblock": false, 00:07:07.917 "num_base_bdevs": 2, 00:07:07.917 "num_base_bdevs_discovered": 2, 00:07:07.917 "num_base_bdevs_operational": 2, 00:07:07.917 "base_bdevs_list": [ 00:07:07.917 { 00:07:07.917 "name": "BaseBdev1", 00:07:07.917 "uuid": "6873c647-f692-463e-a494-4b580e1218fd", 00:07:07.917 "is_configured": true, 00:07:07.917 "data_offset": 0, 00:07:07.917 "data_size": 65536 00:07:07.917 }, 00:07:07.917 { 00:07:07.917 "name": "BaseBdev2", 00:07:07.917 "uuid": "0f339fc8-4015-420b-9145-5b3e2296cffb", 00:07:07.917 "is_configured": true, 00:07:07.917 "data_offset": 0, 00:07:07.917 "data_size": 65536 00:07:07.917 } 00:07:07.917 ] 00:07:07.917 } 00:07:07.917 } 00:07:07.917 }' 00:07:07.917 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:07.917 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:07.917 BaseBdev2' 00:07:07.917 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:07.917 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:07.917 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.177 
21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.177 21:38:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.177 [2024-09-29 21:38:27.005919] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:08.177 [2024-09-29 21:38:27.005953] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.177 [2024-09-29 21:38:27.006013] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.177 21:38:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.177 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.437 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.437 "name": "Existed_Raid", 00:07:08.437 "uuid": "d889b68f-8930-45bc-b0fd-b3b06f6ea03f", 00:07:08.437 "strip_size_kb": 64, 00:07:08.437 "state": "offline", 00:07:08.437 "raid_level": "raid0", 00:07:08.437 "superblock": false, 00:07:08.437 "num_base_bdevs": 2, 00:07:08.437 "num_base_bdevs_discovered": 1, 00:07:08.437 "num_base_bdevs_operational": 1, 00:07:08.437 "base_bdevs_list": [ 00:07:08.437 { 00:07:08.437 "name": null, 00:07:08.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.437 "is_configured": false, 00:07:08.437 "data_offset": 0, 00:07:08.437 "data_size": 65536 00:07:08.437 }, 00:07:08.437 { 00:07:08.437 "name": "BaseBdev2", 00:07:08.437 "uuid": "0f339fc8-4015-420b-9145-5b3e2296cffb", 00:07:08.437 "is_configured": true, 00:07:08.437 "data_offset": 0, 00:07:08.437 "data_size": 65536 00:07:08.437 } 00:07:08.437 ] 00:07:08.437 }' 00:07:08.437 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.437 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.697 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:08.697 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:08.697 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.697 21:38:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:08.697 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.697 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.697 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.697 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:08.697 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:08.697 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:08.697 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.697 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.697 [2024-09-29 21:38:27.611355] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:08.697 [2024-09-29 21:38:27.611421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:08.956 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.956 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:08.956 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:08.956 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.956 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:08.956 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.956 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.956 21:38:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.956 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:08.956 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:08.956 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:08.956 21:38:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60748 00:07:08.956 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60748 ']' 00:07:08.956 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 60748 00:07:08.957 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:08.957 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.957 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60748 00:07:08.957 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.957 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.957 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60748' 00:07:08.957 killing process with pid 60748 00:07:08.957 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60748 00:07:08.957 [2024-09-29 21:38:27.808850] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:08.957 21:38:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 60748 00:07:08.957 [2024-09-29 21:38:27.825524] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 
00:07:10.337 00:07:10.337 real 0m5.271s 00:07:10.337 user 0m7.382s 00:07:10.337 sys 0m0.926s 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.337 ************************************ 00:07:10.337 END TEST raid_state_function_test 00:07:10.337 ************************************ 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.337 21:38:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:10.337 21:38:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:10.337 21:38:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.337 21:38:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:10.337 ************************************ 00:07:10.337 START TEST raid_state_function_test_sb 00:07:10.337 ************************************ 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:10.337 Process raid pid: 61002 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61002 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61002' 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61002 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61002 ']' 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.337 21:38:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:10.597 [2024-09-29 21:38:29.322225] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:10.597 [2024-09-29 21:38:29.322408] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.597 [2024-09-29 21:38:29.484932] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.857 [2024-09-29 21:38:29.730479] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.117 [2024-09-29 21:38:29.969847] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.117 [2024-09-29 21:38:29.969882] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.377 [2024-09-29 21:38:30.148247] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:11.377 [2024-09-29 21:38:30.148308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:11.377 [2024-09-29 21:38:30.148318] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.377 [2024-09-29 21:38:30.148327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.377 
21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.377 "name": "Existed_Raid", 00:07:11.377 "uuid": "af430268-5ab2-4416-b78f-a9fad411931d", 00:07:11.377 "strip_size_kb": 
64, 00:07:11.377 "state": "configuring", 00:07:11.377 "raid_level": "raid0", 00:07:11.377 "superblock": true, 00:07:11.377 "num_base_bdevs": 2, 00:07:11.377 "num_base_bdevs_discovered": 0, 00:07:11.377 "num_base_bdevs_operational": 2, 00:07:11.377 "base_bdevs_list": [ 00:07:11.377 { 00:07:11.377 "name": "BaseBdev1", 00:07:11.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.377 "is_configured": false, 00:07:11.377 "data_offset": 0, 00:07:11.377 "data_size": 0 00:07:11.377 }, 00:07:11.377 { 00:07:11.377 "name": "BaseBdev2", 00:07:11.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.377 "is_configured": false, 00:07:11.377 "data_offset": 0, 00:07:11.377 "data_size": 0 00:07:11.377 } 00:07:11.377 ] 00:07:11.377 }' 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.377 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.637 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:11.638 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.638 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.638 [2024-09-29 21:38:30.587394] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:11.638 [2024-09-29 21:38:30.587511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:11.638 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.638 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:11.638 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.638 21:38:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.638 [2024-09-29 21:38:30.599405] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:11.638 [2024-09-29 21:38:30.599503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:11.638 [2024-09-29 21:38:30.599531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.638 [2024-09-29 21:38:30.599557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.638 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.638 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:11.638 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.638 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.897 [2024-09-29 21:38:30.674365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:11.897 BaseBdev1 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.897 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.897 [ 00:07:11.897 { 00:07:11.898 "name": "BaseBdev1", 00:07:11.898 "aliases": [ 00:07:11.898 "d968e369-7a50-4f95-bbf6-00f4038e74bb" 00:07:11.898 ], 00:07:11.898 "product_name": "Malloc disk", 00:07:11.898 "block_size": 512, 00:07:11.898 "num_blocks": 65536, 00:07:11.898 "uuid": "d968e369-7a50-4f95-bbf6-00f4038e74bb", 00:07:11.898 "assigned_rate_limits": { 00:07:11.898 "rw_ios_per_sec": 0, 00:07:11.898 "rw_mbytes_per_sec": 0, 00:07:11.898 "r_mbytes_per_sec": 0, 00:07:11.898 "w_mbytes_per_sec": 0 00:07:11.898 }, 00:07:11.898 "claimed": true, 00:07:11.898 "claim_type": "exclusive_write", 00:07:11.898 "zoned": false, 00:07:11.898 "supported_io_types": { 00:07:11.898 "read": true, 00:07:11.898 "write": true, 00:07:11.898 "unmap": true, 00:07:11.898 "flush": true, 00:07:11.898 "reset": true, 00:07:11.898 "nvme_admin": false, 00:07:11.898 "nvme_io": false, 00:07:11.898 "nvme_io_md": false, 00:07:11.898 "write_zeroes": true, 00:07:11.898 "zcopy": true, 00:07:11.898 "get_zone_info": false, 00:07:11.898 "zone_management": false, 00:07:11.898 "zone_append": false, 00:07:11.898 "compare": false, 00:07:11.898 "compare_and_write": false, 00:07:11.898 
"abort": true, 00:07:11.898 "seek_hole": false, 00:07:11.898 "seek_data": false, 00:07:11.898 "copy": true, 00:07:11.898 "nvme_iov_md": false 00:07:11.898 }, 00:07:11.898 "memory_domains": [ 00:07:11.898 { 00:07:11.898 "dma_device_id": "system", 00:07:11.898 "dma_device_type": 1 00:07:11.898 }, 00:07:11.898 { 00:07:11.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.898 "dma_device_type": 2 00:07:11.898 } 00:07:11.898 ], 00:07:11.898 "driver_specific": {} 00:07:11.898 } 00:07:11.898 ] 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.898 "name": "Existed_Raid", 00:07:11.898 "uuid": "127a3d73-6794-44ae-ad09-5adbce86a67f", 00:07:11.898 "strip_size_kb": 64, 00:07:11.898 "state": "configuring", 00:07:11.898 "raid_level": "raid0", 00:07:11.898 "superblock": true, 00:07:11.898 "num_base_bdevs": 2, 00:07:11.898 "num_base_bdevs_discovered": 1, 00:07:11.898 "num_base_bdevs_operational": 2, 00:07:11.898 "base_bdevs_list": [ 00:07:11.898 { 00:07:11.898 "name": "BaseBdev1", 00:07:11.898 "uuid": "d968e369-7a50-4f95-bbf6-00f4038e74bb", 00:07:11.898 "is_configured": true, 00:07:11.898 "data_offset": 2048, 00:07:11.898 "data_size": 63488 00:07:11.898 }, 00:07:11.898 { 00:07:11.898 "name": "BaseBdev2", 00:07:11.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:11.898 "is_configured": false, 00:07:11.898 "data_offset": 0, 00:07:11.898 "data_size": 0 00:07:11.898 } 00:07:11.898 ] 00:07:11.898 }' 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.898 21:38:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:12.158 [2024-09-29 21:38:31.121623] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:12.158 [2024-09-29 21:38:31.121669] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.158 [2024-09-29 21:38:31.129656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.158 [2024-09-29 21:38:31.131745] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:12.158 [2024-09-29 21:38:31.131841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:12.158 21:38:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.159 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.159 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.159 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.159 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.159 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.159 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.419 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.419 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.419 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.419 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.419 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.419 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.419 "name": "Existed_Raid", 00:07:12.419 "uuid": "856fbbff-c336-488e-a9e8-bc06f8bf998e", 00:07:12.419 "strip_size_kb": 64, 00:07:12.419 "state": "configuring", 00:07:12.419 "raid_level": "raid0", 00:07:12.419 "superblock": true, 00:07:12.419 "num_base_bdevs": 2, 00:07:12.419 "num_base_bdevs_discovered": 1, 00:07:12.419 "num_base_bdevs_operational": 2, 00:07:12.419 "base_bdevs_list": [ 00:07:12.419 { 00:07:12.419 "name": "BaseBdev1", 00:07:12.419 "uuid": "d968e369-7a50-4f95-bbf6-00f4038e74bb", 00:07:12.419 "is_configured": true, 00:07:12.419 "data_offset": 2048, 
00:07:12.419 "data_size": 63488 00:07:12.419 }, 00:07:12.419 { 00:07:12.419 "name": "BaseBdev2", 00:07:12.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.419 "is_configured": false, 00:07:12.419 "data_offset": 0, 00:07:12.419 "data_size": 0 00:07:12.419 } 00:07:12.419 ] 00:07:12.419 }' 00:07:12.419 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.419 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.679 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:12.679 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.679 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.679 BaseBdev2 00:07:12.679 [2024-09-29 21:38:31.657551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:12.679 [2024-09-29 21:38:31.657827] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:12.679 [2024-09-29 21:38:31.657844] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:12.679 [2024-09-29 21:38:31.658164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:12.679 [2024-09-29 21:38:31.658337] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:12.679 [2024-09-29 21:38:31.658351] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:12.679 [2024-09-29 21:38:31.658518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.679 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.679 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:12.679 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:12.679 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:12.679 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:12.679 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:12.680 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:12.680 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:12.680 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.680 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.939 [ 00:07:12.939 { 00:07:12.939 "name": "BaseBdev2", 00:07:12.939 "aliases": [ 00:07:12.939 "917995cf-4a44-4412-b32b-32f66e981513" 00:07:12.939 ], 00:07:12.939 "product_name": "Malloc disk", 00:07:12.939 "block_size": 512, 00:07:12.939 "num_blocks": 65536, 00:07:12.939 "uuid": "917995cf-4a44-4412-b32b-32f66e981513", 00:07:12.939 "assigned_rate_limits": { 00:07:12.939 "rw_ios_per_sec": 0, 00:07:12.939 "rw_mbytes_per_sec": 0, 00:07:12.939 "r_mbytes_per_sec": 0, 00:07:12.939 "w_mbytes_per_sec": 0 00:07:12.939 }, 00:07:12.939 "claimed": true, 00:07:12.939 "claim_type": 
"exclusive_write", 00:07:12.939 "zoned": false, 00:07:12.939 "supported_io_types": { 00:07:12.939 "read": true, 00:07:12.939 "write": true, 00:07:12.939 "unmap": true, 00:07:12.939 "flush": true, 00:07:12.939 "reset": true, 00:07:12.939 "nvme_admin": false, 00:07:12.939 "nvme_io": false, 00:07:12.939 "nvme_io_md": false, 00:07:12.939 "write_zeroes": true, 00:07:12.939 "zcopy": true, 00:07:12.939 "get_zone_info": false, 00:07:12.939 "zone_management": false, 00:07:12.939 "zone_append": false, 00:07:12.939 "compare": false, 00:07:12.939 "compare_and_write": false, 00:07:12.939 "abort": true, 00:07:12.939 "seek_hole": false, 00:07:12.939 "seek_data": false, 00:07:12.939 "copy": true, 00:07:12.939 "nvme_iov_md": false 00:07:12.939 }, 00:07:12.939 "memory_domains": [ 00:07:12.939 { 00:07:12.939 "dma_device_id": "system", 00:07:12.939 "dma_device_type": 1 00:07:12.939 }, 00:07:12.939 { 00:07:12.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.939 "dma_device_type": 2 00:07:12.939 } 00:07:12.939 ], 00:07:12.939 "driver_specific": {} 00:07:12.939 } 00:07:12.939 ] 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.939 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.939 "name": "Existed_Raid", 00:07:12.939 "uuid": "856fbbff-c336-488e-a9e8-bc06f8bf998e", 00:07:12.939 "strip_size_kb": 64, 00:07:12.939 "state": "online", 00:07:12.940 "raid_level": "raid0", 00:07:12.940 "superblock": true, 00:07:12.940 "num_base_bdevs": 2, 00:07:12.940 "num_base_bdevs_discovered": 2, 00:07:12.940 "num_base_bdevs_operational": 2, 00:07:12.940 "base_bdevs_list": [ 00:07:12.940 { 00:07:12.940 "name": "BaseBdev1", 00:07:12.940 "uuid": "d968e369-7a50-4f95-bbf6-00f4038e74bb", 00:07:12.940 "is_configured": true, 00:07:12.940 "data_offset": 2048, 00:07:12.940 "data_size": 63488 
00:07:12.940 }, 00:07:12.940 { 00:07:12.940 "name": "BaseBdev2", 00:07:12.940 "uuid": "917995cf-4a44-4412-b32b-32f66e981513", 00:07:12.940 "is_configured": true, 00:07:12.940 "data_offset": 2048, 00:07:12.940 "data_size": 63488 00:07:12.940 } 00:07:12.940 ] 00:07:12.940 }' 00:07:12.940 21:38:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.940 21:38:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.220 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:13.220 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:13.220 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:13.220 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:13.220 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:13.220 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:13.220 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:13.220 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:13.220 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.220 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.220 [2024-09-29 21:38:32.192937] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.487 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.487 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:13.487 "name": 
"Existed_Raid", 00:07:13.487 "aliases": [ 00:07:13.487 "856fbbff-c336-488e-a9e8-bc06f8bf998e" 00:07:13.487 ], 00:07:13.487 "product_name": "Raid Volume", 00:07:13.487 "block_size": 512, 00:07:13.487 "num_blocks": 126976, 00:07:13.487 "uuid": "856fbbff-c336-488e-a9e8-bc06f8bf998e", 00:07:13.487 "assigned_rate_limits": { 00:07:13.487 "rw_ios_per_sec": 0, 00:07:13.487 "rw_mbytes_per_sec": 0, 00:07:13.487 "r_mbytes_per_sec": 0, 00:07:13.487 "w_mbytes_per_sec": 0 00:07:13.487 }, 00:07:13.487 "claimed": false, 00:07:13.487 "zoned": false, 00:07:13.487 "supported_io_types": { 00:07:13.487 "read": true, 00:07:13.487 "write": true, 00:07:13.487 "unmap": true, 00:07:13.487 "flush": true, 00:07:13.487 "reset": true, 00:07:13.487 "nvme_admin": false, 00:07:13.487 "nvme_io": false, 00:07:13.487 "nvme_io_md": false, 00:07:13.487 "write_zeroes": true, 00:07:13.487 "zcopy": false, 00:07:13.487 "get_zone_info": false, 00:07:13.487 "zone_management": false, 00:07:13.487 "zone_append": false, 00:07:13.487 "compare": false, 00:07:13.487 "compare_and_write": false, 00:07:13.487 "abort": false, 00:07:13.487 "seek_hole": false, 00:07:13.487 "seek_data": false, 00:07:13.487 "copy": false, 00:07:13.487 "nvme_iov_md": false 00:07:13.487 }, 00:07:13.487 "memory_domains": [ 00:07:13.487 { 00:07:13.487 "dma_device_id": "system", 00:07:13.487 "dma_device_type": 1 00:07:13.487 }, 00:07:13.487 { 00:07:13.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.487 "dma_device_type": 2 00:07:13.487 }, 00:07:13.487 { 00:07:13.487 "dma_device_id": "system", 00:07:13.487 "dma_device_type": 1 00:07:13.487 }, 00:07:13.487 { 00:07:13.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.487 "dma_device_type": 2 00:07:13.487 } 00:07:13.487 ], 00:07:13.487 "driver_specific": { 00:07:13.487 "raid": { 00:07:13.487 "uuid": "856fbbff-c336-488e-a9e8-bc06f8bf998e", 00:07:13.487 "strip_size_kb": 64, 00:07:13.487 "state": "online", 00:07:13.487 "raid_level": "raid0", 00:07:13.487 "superblock": true, 00:07:13.487 
"num_base_bdevs": 2, 00:07:13.487 "num_base_bdevs_discovered": 2, 00:07:13.487 "num_base_bdevs_operational": 2, 00:07:13.487 "base_bdevs_list": [ 00:07:13.487 { 00:07:13.487 "name": "BaseBdev1", 00:07:13.487 "uuid": "d968e369-7a50-4f95-bbf6-00f4038e74bb", 00:07:13.487 "is_configured": true, 00:07:13.487 "data_offset": 2048, 00:07:13.487 "data_size": 63488 00:07:13.487 }, 00:07:13.487 { 00:07:13.487 "name": "BaseBdev2", 00:07:13.487 "uuid": "917995cf-4a44-4412-b32b-32f66e981513", 00:07:13.487 "is_configured": true, 00:07:13.487 "data_offset": 2048, 00:07:13.487 "data_size": 63488 00:07:13.487 } 00:07:13.487 ] 00:07:13.487 } 00:07:13.487 } 00:07:13.487 }' 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:13.488 BaseBdev2' 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.488 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.488 [2024-09-29 21:38:32.384497] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:13.488 [2024-09-29 21:38:32.384577] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:13.488 [2024-09-29 21:38:32.384655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.748 21:38:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.748 "name": "Existed_Raid", 00:07:13.748 "uuid": "856fbbff-c336-488e-a9e8-bc06f8bf998e", 00:07:13.748 "strip_size_kb": 64, 00:07:13.748 "state": "offline", 00:07:13.748 "raid_level": "raid0", 00:07:13.748 "superblock": true, 00:07:13.748 "num_base_bdevs": 2, 00:07:13.748 "num_base_bdevs_discovered": 1, 00:07:13.748 "num_base_bdevs_operational": 1, 00:07:13.748 "base_bdevs_list": [ 00:07:13.748 { 00:07:13.748 "name": null, 00:07:13.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.748 "is_configured": false, 00:07:13.748 "data_offset": 0, 00:07:13.748 "data_size": 63488 00:07:13.748 }, 00:07:13.748 { 00:07:13.748 "name": "BaseBdev2", 00:07:13.748 "uuid": "917995cf-4a44-4412-b32b-32f66e981513", 00:07:13.748 "is_configured": true, 00:07:13.748 "data_offset": 2048, 00:07:13.748 "data_size": 63488 00:07:13.748 } 00:07:13.748 ] 00:07:13.748 }' 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.748 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.006 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:14.006 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:14.006 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.006 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.006 21:38:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.006 21:38:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:14.006 21:38:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.265 [2024-09-29 21:38:33.013953] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:14.265 [2024-09-29 21:38:33.014018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61002 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61002 ']' 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61002 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61002 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.265 killing process with pid 61002 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61002' 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61002 00:07:14.265 [2024-09-29 21:38:33.206217] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.265 21:38:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61002 00:07:14.265 [2024-09-29 21:38:33.223687] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:15.647 ************************************ 00:07:15.647 END TEST 
raid_state_function_test_sb 00:07:15.647 ************************************ 00:07:15.647 21:38:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:15.647 00:07:15.647 real 0m5.335s 00:07:15.647 user 0m7.456s 00:07:15.647 sys 0m0.946s 00:07:15.647 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.647 21:38:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:15.647 21:38:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:15.647 21:38:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:15.647 21:38:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.647 21:38:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:15.907 ************************************ 00:07:15.907 START TEST raid_superblock_test 00:07:15.907 ************************************ 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:15.907 21:38:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61254 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61254 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61254 ']' 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.907 21:38:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.907 [2024-09-29 21:38:34.728099] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:15.907 [2024-09-29 21:38:34.728231] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61254 ] 00:07:16.166 [2024-09-29 21:38:34.897226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.166 [2024-09-29 21:38:35.137206] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.425 [2024-09-29 21:38:35.355732] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.425 [2024-09-29 21:38:35.355843] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:16.685 21:38:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.685 malloc1 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.685 [2024-09-29 21:38:35.600091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:16.685 [2024-09-29 21:38:35.600247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.685 [2024-09-29 21:38:35.600297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:16.685 [2024-09-29 21:38:35.600330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.685 [2024-09-29 21:38:35.602724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.685 [2024-09-29 21:38:35.602814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:16.685 pt1 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:16.685 21:38:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.685 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.945 malloc2 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.945 [2024-09-29 21:38:35.698007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:16.945 [2024-09-29 21:38:35.698080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.945 [2024-09-29 21:38:35.698122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:16.945 
[2024-09-29 21:38:35.698133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.945 [2024-09-29 21:38:35.700545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.945 [2024-09-29 21:38:35.700582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:16.945 pt2 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.945 [2024-09-29 21:38:35.710077] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:16.945 [2024-09-29 21:38:35.712179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:16.945 [2024-09-29 21:38:35.712351] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:16.945 [2024-09-29 21:38:35.712366] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:16.945 [2024-09-29 21:38:35.712614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:16.945 [2024-09-29 21:38:35.712762] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:16.945 [2024-09-29 21:38:35.712775] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:16.945 [2024-09-29 21:38:35.712931] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.945 "name": "raid_bdev1", 00:07:16.945 "uuid": 
"5c235ae7-f813-4ca3-a0d2-17a803db40e4", 00:07:16.945 "strip_size_kb": 64, 00:07:16.945 "state": "online", 00:07:16.945 "raid_level": "raid0", 00:07:16.945 "superblock": true, 00:07:16.945 "num_base_bdevs": 2, 00:07:16.945 "num_base_bdevs_discovered": 2, 00:07:16.945 "num_base_bdevs_operational": 2, 00:07:16.945 "base_bdevs_list": [ 00:07:16.945 { 00:07:16.945 "name": "pt1", 00:07:16.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:16.945 "is_configured": true, 00:07:16.945 "data_offset": 2048, 00:07:16.945 "data_size": 63488 00:07:16.945 }, 00:07:16.945 { 00:07:16.945 "name": "pt2", 00:07:16.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:16.945 "is_configured": true, 00:07:16.945 "data_offset": 2048, 00:07:16.945 "data_size": 63488 00:07:16.945 } 00:07:16.945 ] 00:07:16.945 }' 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.945 21:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.205 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:17.205 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:17.205 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:17.205 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:17.205 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:17.205 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:17.205 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:17.205 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.205 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.205 
21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:17.205 [2024-09-29 21:38:36.161494] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.205 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:17.464 "name": "raid_bdev1", 00:07:17.464 "aliases": [ 00:07:17.464 "5c235ae7-f813-4ca3-a0d2-17a803db40e4" 00:07:17.464 ], 00:07:17.464 "product_name": "Raid Volume", 00:07:17.464 "block_size": 512, 00:07:17.464 "num_blocks": 126976, 00:07:17.464 "uuid": "5c235ae7-f813-4ca3-a0d2-17a803db40e4", 00:07:17.464 "assigned_rate_limits": { 00:07:17.464 "rw_ios_per_sec": 0, 00:07:17.464 "rw_mbytes_per_sec": 0, 00:07:17.464 "r_mbytes_per_sec": 0, 00:07:17.464 "w_mbytes_per_sec": 0 00:07:17.464 }, 00:07:17.464 "claimed": false, 00:07:17.464 "zoned": false, 00:07:17.464 "supported_io_types": { 00:07:17.464 "read": true, 00:07:17.464 "write": true, 00:07:17.464 "unmap": true, 00:07:17.464 "flush": true, 00:07:17.464 "reset": true, 00:07:17.464 "nvme_admin": false, 00:07:17.464 "nvme_io": false, 00:07:17.464 "nvme_io_md": false, 00:07:17.464 "write_zeroes": true, 00:07:17.464 "zcopy": false, 00:07:17.464 "get_zone_info": false, 00:07:17.464 "zone_management": false, 00:07:17.464 "zone_append": false, 00:07:17.464 "compare": false, 00:07:17.464 "compare_and_write": false, 00:07:17.464 "abort": false, 00:07:17.464 "seek_hole": false, 00:07:17.464 "seek_data": false, 00:07:17.464 "copy": false, 00:07:17.464 "nvme_iov_md": false 00:07:17.464 }, 00:07:17.464 "memory_domains": [ 00:07:17.464 { 00:07:17.464 "dma_device_id": "system", 00:07:17.464 "dma_device_type": 1 00:07:17.464 }, 00:07:17.464 { 00:07:17.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.464 "dma_device_type": 2 00:07:17.464 }, 00:07:17.464 { 00:07:17.464 "dma_device_id": "system", 00:07:17.464 
"dma_device_type": 1 00:07:17.464 }, 00:07:17.464 { 00:07:17.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.464 "dma_device_type": 2 00:07:17.464 } 00:07:17.464 ], 00:07:17.464 "driver_specific": { 00:07:17.464 "raid": { 00:07:17.464 "uuid": "5c235ae7-f813-4ca3-a0d2-17a803db40e4", 00:07:17.464 "strip_size_kb": 64, 00:07:17.464 "state": "online", 00:07:17.464 "raid_level": "raid0", 00:07:17.464 "superblock": true, 00:07:17.464 "num_base_bdevs": 2, 00:07:17.464 "num_base_bdevs_discovered": 2, 00:07:17.464 "num_base_bdevs_operational": 2, 00:07:17.464 "base_bdevs_list": [ 00:07:17.464 { 00:07:17.464 "name": "pt1", 00:07:17.464 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:17.464 "is_configured": true, 00:07:17.464 "data_offset": 2048, 00:07:17.464 "data_size": 63488 00:07:17.464 }, 00:07:17.464 { 00:07:17.464 "name": "pt2", 00:07:17.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:17.464 "is_configured": true, 00:07:17.464 "data_offset": 2048, 00:07:17.464 "data_size": 63488 00:07:17.464 } 00:07:17.464 ] 00:07:17.464 } 00:07:17.464 } 00:07:17.464 }' 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:17.464 pt2' 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.464 21:38:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.464 [2024-09-29 21:38:36.377109] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5c235ae7-f813-4ca3-a0d2-17a803db40e4 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5c235ae7-f813-4ca3-a0d2-17a803db40e4 ']' 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.464 [2024-09-29 21:38:36.428781] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:17.464 [2024-09-29 21:38:36.428850] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:17.464 [2024-09-29 21:38:36.428952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.464 [2024-09-29 21:38:36.429009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.464 [2024-09-29 21:38:36.429053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.464 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.724 [2024-09-29 21:38:36.568549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:17.724 [2024-09-29 21:38:36.570680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:17.724 [2024-09-29 21:38:36.570789] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:17.724 [2024-09-29 21:38:36.570881] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:17.724 [2024-09-29 21:38:36.570931] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:17.724 [2024-09-29 21:38:36.570984] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:17.724 request: 00:07:17.724 { 00:07:17.724 "name": "raid_bdev1", 00:07:17.724 "raid_level": "raid0", 00:07:17.724 "base_bdevs": [ 00:07:17.724 "malloc1", 00:07:17.724 "malloc2" 00:07:17.724 ], 00:07:17.724 "strip_size_kb": 64, 00:07:17.724 "superblock": false, 00:07:17.724 "method": "bdev_raid_create", 00:07:17.724 "req_id": 1 00:07:17.724 } 00:07:17.724 Got JSON-RPC error response 00:07:17.724 response: 00:07:17.724 { 00:07:17.724 "code": -17, 00:07:17.724 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:17.724 } 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.724 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.724 [2024-09-29 21:38:36.640412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:17.724 [2024-09-29 21:38:36.640461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.724 [2024-09-29 21:38:36.640479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:17.725 [2024-09-29 21:38:36.640491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.725 [2024-09-29 21:38:36.642833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.725 [2024-09-29 21:38:36.642871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:17.725 [2024-09-29 21:38:36.642931] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:17.725 [2024-09-29 21:38:36.642987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:17.725 pt1 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.725 "name": "raid_bdev1", 00:07:17.725 "uuid": "5c235ae7-f813-4ca3-a0d2-17a803db40e4", 00:07:17.725 "strip_size_kb": 64, 00:07:17.725 "state": "configuring", 00:07:17.725 "raid_level": "raid0", 00:07:17.725 "superblock": true, 00:07:17.725 "num_base_bdevs": 2, 00:07:17.725 "num_base_bdevs_discovered": 1, 00:07:17.725 "num_base_bdevs_operational": 2, 00:07:17.725 "base_bdevs_list": [ 00:07:17.725 { 00:07:17.725 "name": "pt1", 00:07:17.725 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:17.725 "is_configured": true, 00:07:17.725 "data_offset": 2048, 00:07:17.725 "data_size": 63488 00:07:17.725 }, 00:07:17.725 { 00:07:17.725 "name": null, 00:07:17.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:17.725 "is_configured": false, 00:07:17.725 "data_offset": 2048, 00:07:17.725 "data_size": 63488 00:07:17.725 } 00:07:17.725 ] 00:07:17.725 }' 00:07:17.725 21:38:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.725 21:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.294 [2024-09-29 21:38:37.115620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:18.294 [2024-09-29 21:38:37.115747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.294 [2024-09-29 21:38:37.115785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:18.294 [2024-09-29 21:38:37.115813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.294 [2024-09-29 21:38:37.116332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.294 [2024-09-29 21:38:37.116408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:18.294 [2024-09-29 21:38:37.116507] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:18.294 [2024-09-29 21:38:37.116557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:18.294 [2024-09-29 21:38:37.116685] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:18.294 [2024-09-29 21:38:37.116726] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:18.294 [2024-09-29 21:38:37.116988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:18.294 [2024-09-29 21:38:37.117188] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:18.294 [2024-09-29 21:38:37.117230] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:18.294 [2024-09-29 21:38:37.117404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.294 pt2 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.294 "name": "raid_bdev1", 00:07:18.294 "uuid": "5c235ae7-f813-4ca3-a0d2-17a803db40e4", 00:07:18.294 "strip_size_kb": 64, 00:07:18.294 "state": "online", 00:07:18.294 "raid_level": "raid0", 00:07:18.294 "superblock": true, 00:07:18.294 "num_base_bdevs": 2, 00:07:18.294 "num_base_bdevs_discovered": 2, 00:07:18.294 "num_base_bdevs_operational": 2, 00:07:18.294 "base_bdevs_list": [ 00:07:18.294 { 00:07:18.294 "name": "pt1", 00:07:18.294 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:18.294 "is_configured": true, 00:07:18.294 "data_offset": 2048, 00:07:18.294 "data_size": 63488 00:07:18.294 }, 00:07:18.294 { 00:07:18.294 "name": "pt2", 00:07:18.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:18.294 "is_configured": true, 00:07:18.294 "data_offset": 2048, 00:07:18.294 "data_size": 63488 00:07:18.294 } 00:07:18.294 ] 00:07:18.294 }' 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.294 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.553 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:18.554 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:18.554 
21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:18.554 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:18.554 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:18.554 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:18.554 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:18.554 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.554 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.554 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:18.813 [2024-09-29 21:38:37.543112] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:18.813 "name": "raid_bdev1", 00:07:18.813 "aliases": [ 00:07:18.813 "5c235ae7-f813-4ca3-a0d2-17a803db40e4" 00:07:18.813 ], 00:07:18.813 "product_name": "Raid Volume", 00:07:18.813 "block_size": 512, 00:07:18.813 "num_blocks": 126976, 00:07:18.813 "uuid": "5c235ae7-f813-4ca3-a0d2-17a803db40e4", 00:07:18.813 "assigned_rate_limits": { 00:07:18.813 "rw_ios_per_sec": 0, 00:07:18.813 "rw_mbytes_per_sec": 0, 00:07:18.813 "r_mbytes_per_sec": 0, 00:07:18.813 "w_mbytes_per_sec": 0 00:07:18.813 }, 00:07:18.813 "claimed": false, 00:07:18.813 "zoned": false, 00:07:18.813 "supported_io_types": { 00:07:18.813 "read": true, 00:07:18.813 "write": true, 00:07:18.813 "unmap": true, 00:07:18.813 "flush": true, 00:07:18.813 "reset": true, 00:07:18.813 "nvme_admin": false, 00:07:18.813 "nvme_io": false, 00:07:18.813 "nvme_io_md": false, 00:07:18.813 
"write_zeroes": true, 00:07:18.813 "zcopy": false, 00:07:18.813 "get_zone_info": false, 00:07:18.813 "zone_management": false, 00:07:18.813 "zone_append": false, 00:07:18.813 "compare": false, 00:07:18.813 "compare_and_write": false, 00:07:18.813 "abort": false, 00:07:18.813 "seek_hole": false, 00:07:18.813 "seek_data": false, 00:07:18.813 "copy": false, 00:07:18.813 "nvme_iov_md": false 00:07:18.813 }, 00:07:18.813 "memory_domains": [ 00:07:18.813 { 00:07:18.813 "dma_device_id": "system", 00:07:18.813 "dma_device_type": 1 00:07:18.813 }, 00:07:18.813 { 00:07:18.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.813 "dma_device_type": 2 00:07:18.813 }, 00:07:18.813 { 00:07:18.813 "dma_device_id": "system", 00:07:18.813 "dma_device_type": 1 00:07:18.813 }, 00:07:18.813 { 00:07:18.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.813 "dma_device_type": 2 00:07:18.813 } 00:07:18.813 ], 00:07:18.813 "driver_specific": { 00:07:18.813 "raid": { 00:07:18.813 "uuid": "5c235ae7-f813-4ca3-a0d2-17a803db40e4", 00:07:18.813 "strip_size_kb": 64, 00:07:18.813 "state": "online", 00:07:18.813 "raid_level": "raid0", 00:07:18.813 "superblock": true, 00:07:18.813 "num_base_bdevs": 2, 00:07:18.813 "num_base_bdevs_discovered": 2, 00:07:18.813 "num_base_bdevs_operational": 2, 00:07:18.813 "base_bdevs_list": [ 00:07:18.813 { 00:07:18.813 "name": "pt1", 00:07:18.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:18.813 "is_configured": true, 00:07:18.813 "data_offset": 2048, 00:07:18.813 "data_size": 63488 00:07:18.813 }, 00:07:18.813 { 00:07:18.813 "name": "pt2", 00:07:18.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:18.813 "is_configured": true, 00:07:18.813 "data_offset": 2048, 00:07:18.813 "data_size": 63488 00:07:18.813 } 00:07:18.813 ] 00:07:18.813 } 00:07:18.813 } 00:07:18.813 }' 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:18.813 pt2' 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.813 21:38:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.813 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.813 [2024-09-29 21:38:37.794632] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5c235ae7-f813-4ca3-a0d2-17a803db40e4 '!=' 5c235ae7-f813-4ca3-a0d2-17a803db40e4 ']' 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61254 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61254 ']' 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61254 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61254 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.073 killing process with pid 61254 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61254' 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61254 00:07:19.073 [2024-09-29 21:38:37.881005] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.073 [2024-09-29 21:38:37.881104] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.073 [2024-09-29 21:38:37.881153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.073 [2024-09-29 21:38:37.881165] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:19.073 21:38:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61254 00:07:19.333 [2024-09-29 21:38:38.097292] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:20.714 21:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:20.714 00:07:20.714 real 0m4.776s 00:07:20.714 user 0m6.474s 00:07:20.714 sys 0m0.886s 00:07:20.714 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.714 ************************************ 00:07:20.714 END TEST raid_superblock_test 00:07:20.714 ************************************ 00:07:20.714 21:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.714 21:38:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:20.714 21:38:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:20.714 21:38:39 bdev_raid -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:20.714 21:38:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.714 ************************************ 00:07:20.714 START TEST raid_read_error_test 00:07:20.714 ************************************ 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gCy60fIBbk 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61466 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61466 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61466 ']' 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.714 21:38:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.714 [2024-09-29 21:38:39.592850] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:20.714 [2024-09-29 21:38:39.593022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61466 ] 00:07:20.974 [2024-09-29 21:38:39.753153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.234 [2024-09-29 21:38:40.004767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.494 [2024-09-29 21:38:40.234809] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.494 [2024-09-29 21:38:40.234865] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.494 BaseBdev1_malloc 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.494 true 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.494 [2024-09-29 21:38:40.452798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:21.494 [2024-09-29 21:38:40.452864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.494 [2024-09-29 21:38:40.452882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:21.494 [2024-09-29 21:38:40.452895] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.494 [2024-09-29 21:38:40.455268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.494 [2024-09-29 21:38:40.455306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:21.494 BaseBdev1 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.494 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:21.755 BaseBdev2_malloc 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.755 true 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.755 [2024-09-29 21:38:40.534964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:21.755 [2024-09-29 21:38:40.535022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.755 [2024-09-29 21:38:40.535052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:21.755 [2024-09-29 21:38:40.535064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.755 [2024-09-29 21:38:40.537395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.755 [2024-09-29 21:38:40.537512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:21.755 BaseBdev2 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:21.755 21:38:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.755 [2024-09-29 21:38:40.547049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.755 [2024-09-29 21:38:40.549098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:21.755 [2024-09-29 21:38:40.549303] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:21.755 [2024-09-29 21:38:40.549319] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:21.755 [2024-09-29 21:38:40.549559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:21.755 [2024-09-29 21:38:40.549726] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:21.755 [2024-09-29 21:38:40.549747] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:21.755 [2024-09-29 21:38:40.549901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.755 "name": "raid_bdev1", 00:07:21.755 "uuid": "1fd1b562-4743-4e9e-9c57-0ba829efcc98", 00:07:21.755 "strip_size_kb": 64, 00:07:21.755 "state": "online", 00:07:21.755 "raid_level": "raid0", 00:07:21.755 "superblock": true, 00:07:21.755 "num_base_bdevs": 2, 00:07:21.755 "num_base_bdevs_discovered": 2, 00:07:21.755 "num_base_bdevs_operational": 2, 00:07:21.755 "base_bdevs_list": [ 00:07:21.755 { 00:07:21.755 "name": "BaseBdev1", 00:07:21.755 "uuid": "7e05bfb7-a6ef-518d-b330-8f7ed2ec90c1", 00:07:21.755 "is_configured": true, 00:07:21.755 "data_offset": 2048, 00:07:21.755 "data_size": 63488 00:07:21.755 }, 00:07:21.755 { 00:07:21.755 "name": "BaseBdev2", 00:07:21.755 "uuid": "22cd480a-d489-53e8-b5c5-2e084b5e8634", 00:07:21.755 "is_configured": true, 00:07:21.755 "data_offset": 2048, 00:07:21.755 "data_size": 63488 00:07:21.755 } 00:07:21.755 ] 00:07:21.755 }' 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.755 21:38:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.325 21:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:22.325 21:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:22.325 [2024-09-29 21:38:41.075270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:23.264 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:23.264 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.264 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.264 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.264 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:23.264 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:23.264 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:23.264 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:23.264 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:23.264 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.264 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.265 "name": "raid_bdev1", 00:07:23.265 "uuid": "1fd1b562-4743-4e9e-9c57-0ba829efcc98", 00:07:23.265 "strip_size_kb": 64, 00:07:23.265 "state": "online", 00:07:23.265 "raid_level": "raid0", 00:07:23.265 "superblock": true, 00:07:23.265 "num_base_bdevs": 2, 00:07:23.265 "num_base_bdevs_discovered": 2, 00:07:23.265 "num_base_bdevs_operational": 2, 00:07:23.265 "base_bdevs_list": [ 00:07:23.265 { 00:07:23.265 "name": "BaseBdev1", 00:07:23.265 "uuid": "7e05bfb7-a6ef-518d-b330-8f7ed2ec90c1", 00:07:23.265 "is_configured": true, 00:07:23.265 "data_offset": 2048, 00:07:23.265 "data_size": 63488 00:07:23.265 }, 00:07:23.265 { 00:07:23.265 "name": "BaseBdev2", 00:07:23.265 "uuid": "22cd480a-d489-53e8-b5c5-2e084b5e8634", 00:07:23.265 "is_configured": true, 00:07:23.265 "data_offset": 2048, 00:07:23.265 "data_size": 63488 00:07:23.265 } 00:07:23.265 ] 00:07:23.265 }' 00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.265 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.525 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:23.525 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.525 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.525 [2024-09-29 21:38:42.468025] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:23.525 [2024-09-29 21:38:42.468088] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.525 [2024-09-29 21:38:42.470834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.525 [2024-09-29 21:38:42.470932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:23.525 [2024-09-29 21:38:42.470987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.525 [2024-09-29 21:38:42.471045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:23.525 { 00:07:23.525 "results": [ 00:07:23.525 { 00:07:23.525 "job": "raid_bdev1", 00:07:23.525 "core_mask": "0x1", 00:07:23.525 "workload": "randrw", 00:07:23.525 "percentage": 50, 00:07:23.525 "status": "finished", 00:07:23.525 "queue_depth": 1, 00:07:23.525 "io_size": 131072, 00:07:23.525 "runtime": 1.393369, 00:07:23.525 "iops": 15367.788432209989, 00:07:23.525 "mibps": 1920.9735540262486, 00:07:23.525 "io_failed": 1, 00:07:23.525 "io_timeout": 0, 00:07:23.525 "avg_latency_us": 91.4060329466541, 00:07:23.525 "min_latency_us": 24.370305676855896, 00:07:23.525 "max_latency_us": 1380.8349344978167 00:07:23.525 } 00:07:23.525 ], 00:07:23.525 "core_count": 1 00:07:23.525 } 00:07:23.525 21:38:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.525 21:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61466 00:07:23.525 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61466 ']' 00:07:23.525 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61466 00:07:23.525 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:23.525 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.525 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61466 00:07:23.785 killing process with pid 61466 00:07:23.785 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.785 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.785 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61466' 00:07:23.785 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61466 00:07:23.785 [2024-09-29 21:38:42.520705] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.785 21:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61466 00:07:23.785 [2024-09-29 21:38:42.665238] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.166 21:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:25.166 21:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gCy60fIBbk 00:07:25.166 21:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:25.166 21:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:25.166 21:38:44 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:25.166 21:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:25.166 21:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:25.166 ************************************ 00:07:25.166 END TEST raid_read_error_test 00:07:25.166 ************************************ 00:07:25.166 21:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:25.166 00:07:25.166 real 0m4.556s 00:07:25.166 user 0m5.239s 00:07:25.166 sys 0m0.676s 00:07:25.166 21:38:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.166 21:38:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.166 21:38:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:25.166 21:38:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:25.166 21:38:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.166 21:38:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.166 ************************************ 00:07:25.166 START TEST raid_write_error_test 00:07:25.166 ************************************ 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:25.166 21:38:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:25.166 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UNyeIFk1Yx 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61616 00:07:25.167 21:38:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61616 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61616 ']' 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:25.167 21:38:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.427 [2024-09-29 21:38:44.224727] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:25.427 [2024-09-29 21:38:44.224843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61616 ] 00:07:25.427 [2024-09-29 21:38:44.393780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.687 [2024-09-29 21:38:44.640501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.947 [2024-09-29 21:38:44.867169] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.947 [2024-09-29 21:38:44.867306] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.207 BaseBdev1_malloc 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.207 true 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.207 [2024-09-29 21:38:45.114349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:26.207 [2024-09-29 21:38:45.114418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.207 [2024-09-29 21:38:45.114436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:26.207 [2024-09-29 21:38:45.114449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.207 [2024-09-29 21:38:45.116943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.207 [2024-09-29 21:38:45.116984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:26.207 BaseBdev1 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.207 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.468 BaseBdev2_malloc 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:26.468 21:38:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.468 true 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.468 [2024-09-29 21:38:45.216434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:26.468 [2024-09-29 21:38:45.216493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.468 [2024-09-29 21:38:45.216510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:26.468 [2024-09-29 21:38:45.216522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.468 [2024-09-29 21:38:45.218851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.468 [2024-09-29 21:38:45.218892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:26.468 BaseBdev2 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.468 [2024-09-29 21:38:45.228510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:26.468 [2024-09-29 21:38:45.230560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:26.468 [2024-09-29 21:38:45.230755] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:26.468 [2024-09-29 21:38:45.230769] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:26.468 [2024-09-29 21:38:45.230997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:26.468 [2024-09-29 21:38:45.231191] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:26.468 [2024-09-29 21:38:45.231202] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:26.468 [2024-09-29 21:38:45.231384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.468 "name": "raid_bdev1", 00:07:26.468 "uuid": "69b66ca9-d2e2-4a53-ba9d-0b7e4ef9bf9c", 00:07:26.468 "strip_size_kb": 64, 00:07:26.468 "state": "online", 00:07:26.468 "raid_level": "raid0", 00:07:26.468 "superblock": true, 00:07:26.468 "num_base_bdevs": 2, 00:07:26.468 "num_base_bdevs_discovered": 2, 00:07:26.468 "num_base_bdevs_operational": 2, 00:07:26.468 "base_bdevs_list": [ 00:07:26.468 { 00:07:26.468 "name": "BaseBdev1", 00:07:26.468 "uuid": "9085759c-83c7-5a0d-a6c6-9ef5a44f0b9d", 00:07:26.468 "is_configured": true, 00:07:26.468 "data_offset": 2048, 00:07:26.468 "data_size": 63488 00:07:26.468 }, 00:07:26.468 { 00:07:26.468 "name": "BaseBdev2", 00:07:26.468 "uuid": "970b7756-22c3-51e4-a0c6-72d451226f10", 00:07:26.468 "is_configured": true, 00:07:26.468 "data_offset": 2048, 00:07:26.468 "data_size": 63488 00:07:26.468 } 00:07:26.468 ] 00:07:26.468 }' 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.468 21:38:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.728 21:38:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:26.728 21:38:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:26.988 [2024-09-29 21:38:45.721197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.928 21:38:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.928 "name": "raid_bdev1", 00:07:27.928 "uuid": "69b66ca9-d2e2-4a53-ba9d-0b7e4ef9bf9c", 00:07:27.928 "strip_size_kb": 64, 00:07:27.928 "state": "online", 00:07:27.928 "raid_level": "raid0", 00:07:27.928 "superblock": true, 00:07:27.928 "num_base_bdevs": 2, 00:07:27.928 "num_base_bdevs_discovered": 2, 00:07:27.928 "num_base_bdevs_operational": 2, 00:07:27.928 "base_bdevs_list": [ 00:07:27.928 { 00:07:27.928 "name": "BaseBdev1", 00:07:27.928 "uuid": "9085759c-83c7-5a0d-a6c6-9ef5a44f0b9d", 00:07:27.928 "is_configured": true, 00:07:27.928 "data_offset": 2048, 00:07:27.928 "data_size": 63488 00:07:27.928 }, 00:07:27.928 { 00:07:27.928 "name": "BaseBdev2", 00:07:27.928 "uuid": "970b7756-22c3-51e4-a0c6-72d451226f10", 00:07:27.928 "is_configured": true, 00:07:27.928 "data_offset": 2048, 00:07:27.928 "data_size": 63488 00:07:27.928 } 00:07:27.928 ] 00:07:27.928 }' 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.928 21:38:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.188 [2024-09-29 21:38:47.109880] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:28.188 [2024-09-29 21:38:47.110002] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.188 [2024-09-29 21:38:47.112721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.188 [2024-09-29 21:38:47.112816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.188 [2024-09-29 21:38:47.112871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.188 [2024-09-29 21:38:47.112914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:28.188 { 00:07:28.188 "results": [ 00:07:28.188 { 00:07:28.188 "job": "raid_bdev1", 00:07:28.188 "core_mask": "0x1", 00:07:28.188 "workload": "randrw", 00:07:28.188 "percentage": 50, 00:07:28.188 "status": "finished", 00:07:28.188 "queue_depth": 1, 00:07:28.188 "io_size": 131072, 00:07:28.188 "runtime": 1.389554, 00:07:28.188 "iops": 15169.615574493686, 00:07:28.188 "mibps": 1896.2019468117107, 00:07:28.188 "io_failed": 1, 00:07:28.188 "io_timeout": 0, 00:07:28.188 "avg_latency_us": 92.66284199100122, 00:07:28.188 "min_latency_us": 24.370305676855896, 00:07:28.188 "max_latency_us": 1380.8349344978167 00:07:28.188 } 00:07:28.188 ], 00:07:28.188 "core_count": 1 00:07:28.188 } 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61616 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 61616 ']' 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61616 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61616 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61616' 00:07:28.188 killing process with pid 61616 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61616 00:07:28.188 [2024-09-29 21:38:47.158301] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.188 21:38:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61616 00:07:28.448 [2024-09-29 21:38:47.303388] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:29.869 21:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UNyeIFk1Yx 00:07:29.869 21:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:29.869 21:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:29.869 21:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:29.869 21:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:29.869 ************************************ 00:07:29.869 END TEST raid_write_error_test 00:07:29.869 ************************************ 00:07:29.870 
21:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:29.870 21:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:29.870 21:38:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:29.870 00:07:29.870 real 0m4.592s 00:07:29.870 user 0m5.290s 00:07:29.870 sys 0m0.666s 00:07:29.870 21:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.870 21:38:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.870 21:38:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:29.870 21:38:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:29.870 21:38:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:29.870 21:38:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.870 21:38:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:29.870 ************************************ 00:07:29.870 START TEST raid_state_function_test 00:07:29.870 ************************************ 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61755 00:07:29.870 21:38:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61755' 00:07:29.870 Process raid pid: 61755 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61755 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61755 ']' 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.870 21:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.133 [2024-09-29 21:38:48.885366] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:30.133 [2024-09-29 21:38:48.885623] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.133 [2024-09-29 21:38:49.044779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.393 [2024-09-29 21:38:49.291449] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.652 [2024-09-29 21:38:49.528277] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.652 [2024-09-29 21:38:49.528394] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.911 21:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.912 [2024-09-29 21:38:49.716177] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:30.912 [2024-09-29 21:38:49.716305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:30.912 [2024-09-29 21:38:49.716346] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:30.912 [2024-09-29 21:38:49.716387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.912 21:38:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.912 "name": "Existed_Raid", 00:07:30.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.912 "strip_size_kb": 64, 00:07:30.912 "state": "configuring", 00:07:30.912 
"raid_level": "concat", 00:07:30.912 "superblock": false, 00:07:30.912 "num_base_bdevs": 2, 00:07:30.912 "num_base_bdevs_discovered": 0, 00:07:30.912 "num_base_bdevs_operational": 2, 00:07:30.912 "base_bdevs_list": [ 00:07:30.912 { 00:07:30.912 "name": "BaseBdev1", 00:07:30.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.912 "is_configured": false, 00:07:30.912 "data_offset": 0, 00:07:30.912 "data_size": 0 00:07:30.912 }, 00:07:30.912 { 00:07:30.912 "name": "BaseBdev2", 00:07:30.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.912 "is_configured": false, 00:07:30.912 "data_offset": 0, 00:07:30.912 "data_size": 0 00:07:30.912 } 00:07:30.912 ] 00:07:30.912 }' 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.912 21:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.172 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:31.172 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.172 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.172 [2024-09-29 21:38:50.135331] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:31.172 [2024-09-29 21:38:50.135417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:31.172 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.172 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:31.172 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.172 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:31.172 [2024-09-29 21:38:50.147331] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:31.172 [2024-09-29 21:38:50.147372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:31.172 [2024-09-29 21:38:50.147380] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.172 [2024-09-29 21:38:50.147392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:31.172 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.172 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:31.172 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.172 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.432 [2024-09-29 21:38:50.231237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:31.432 BaseBdev1 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.432 [ 00:07:31.432 { 00:07:31.432 "name": "BaseBdev1", 00:07:31.432 "aliases": [ 00:07:31.432 "19312d59-2c59-42f1-b63f-1f3f811b230f" 00:07:31.432 ], 00:07:31.432 "product_name": "Malloc disk", 00:07:31.432 "block_size": 512, 00:07:31.432 "num_blocks": 65536, 00:07:31.432 "uuid": "19312d59-2c59-42f1-b63f-1f3f811b230f", 00:07:31.432 "assigned_rate_limits": { 00:07:31.432 "rw_ios_per_sec": 0, 00:07:31.432 "rw_mbytes_per_sec": 0, 00:07:31.432 "r_mbytes_per_sec": 0, 00:07:31.432 "w_mbytes_per_sec": 0 00:07:31.432 }, 00:07:31.432 "claimed": true, 00:07:31.432 "claim_type": "exclusive_write", 00:07:31.432 "zoned": false, 00:07:31.432 "supported_io_types": { 00:07:31.432 "read": true, 00:07:31.432 "write": true, 00:07:31.432 "unmap": true, 00:07:31.432 "flush": true, 00:07:31.432 "reset": true, 00:07:31.432 "nvme_admin": false, 00:07:31.432 "nvme_io": false, 00:07:31.432 "nvme_io_md": false, 00:07:31.432 "write_zeroes": true, 00:07:31.432 "zcopy": true, 00:07:31.432 "get_zone_info": false, 00:07:31.432 "zone_management": false, 00:07:31.432 "zone_append": false, 00:07:31.432 "compare": false, 00:07:31.432 "compare_and_write": false, 00:07:31.432 "abort": true, 00:07:31.432 "seek_hole": false, 00:07:31.432 "seek_data": false, 00:07:31.432 "copy": true, 00:07:31.432 "nvme_iov_md": 
false 00:07:31.432 }, 00:07:31.432 "memory_domains": [ 00:07:31.432 { 00:07:31.432 "dma_device_id": "system", 00:07:31.432 "dma_device_type": 1 00:07:31.432 }, 00:07:31.432 { 00:07:31.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.432 "dma_device_type": 2 00:07:31.432 } 00:07:31.432 ], 00:07:31.432 "driver_specific": {} 00:07:31.432 } 00:07:31.432 ] 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.432 
21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.432 "name": "Existed_Raid", 00:07:31.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.432 "strip_size_kb": 64, 00:07:31.432 "state": "configuring", 00:07:31.432 "raid_level": "concat", 00:07:31.432 "superblock": false, 00:07:31.432 "num_base_bdevs": 2, 00:07:31.432 "num_base_bdevs_discovered": 1, 00:07:31.432 "num_base_bdevs_operational": 2, 00:07:31.432 "base_bdevs_list": [ 00:07:31.432 { 00:07:31.432 "name": "BaseBdev1", 00:07:31.432 "uuid": "19312d59-2c59-42f1-b63f-1f3f811b230f", 00:07:31.432 "is_configured": true, 00:07:31.432 "data_offset": 0, 00:07:31.432 "data_size": 65536 00:07:31.432 }, 00:07:31.432 { 00:07:31.432 "name": "BaseBdev2", 00:07:31.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.432 "is_configured": false, 00:07:31.432 "data_offset": 0, 00:07:31.432 "data_size": 0 00:07:31.432 } 00:07:31.432 ] 00:07:31.432 }' 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.432 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.692 [2024-09-29 21:38:50.658485] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:31.692 [2024-09-29 21:38:50.658567] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.692 [2024-09-29 21:38:50.666511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:31.692 [2024-09-29 21:38:50.668607] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.692 [2024-09-29 21:38:50.668704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.692 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.951 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.951 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.951 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.951 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.951 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.951 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.951 "name": "Existed_Raid", 00:07:31.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.951 "strip_size_kb": 64, 00:07:31.951 "state": "configuring", 00:07:31.951 "raid_level": "concat", 00:07:31.951 "superblock": false, 00:07:31.951 "num_base_bdevs": 2, 00:07:31.951 "num_base_bdevs_discovered": 1, 00:07:31.951 "num_base_bdevs_operational": 2, 00:07:31.951 "base_bdevs_list": [ 00:07:31.951 { 00:07:31.951 "name": "BaseBdev1", 00:07:31.951 "uuid": "19312d59-2c59-42f1-b63f-1f3f811b230f", 00:07:31.951 "is_configured": true, 00:07:31.951 "data_offset": 0, 00:07:31.951 "data_size": 65536 00:07:31.951 }, 00:07:31.951 { 00:07:31.951 "name": "BaseBdev2", 00:07:31.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.951 "is_configured": false, 00:07:31.951 "data_offset": 0, 00:07:31.951 "data_size": 0 00:07:31.951 } 
00:07:31.951 ] 00:07:31.951 }' 00:07:31.951 21:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.951 21:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.211 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:32.211 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.211 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.211 [2024-09-29 21:38:51.182358] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:32.211 [2024-09-29 21:38:51.182407] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:32.211 [2024-09-29 21:38:51.182416] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:32.211 [2024-09-29 21:38:51.182707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:32.211 [2024-09-29 21:38:51.182881] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:32.211 [2024-09-29 21:38:51.182893] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:32.211 [2024-09-29 21:38:51.183197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.211 BaseBdev2 00:07:32.211 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.211 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:32.211 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:32.211 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:32.211 21:38:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:32.211 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:32.211 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:32.211 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:32.211 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.211 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.470 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.470 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:32.470 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.470 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.470 [ 00:07:32.470 { 00:07:32.470 "name": "BaseBdev2", 00:07:32.470 "aliases": [ 00:07:32.471 "a77d2393-a449-4ccf-86d8-1f426526df4d" 00:07:32.471 ], 00:07:32.471 "product_name": "Malloc disk", 00:07:32.471 "block_size": 512, 00:07:32.471 "num_blocks": 65536, 00:07:32.471 "uuid": "a77d2393-a449-4ccf-86d8-1f426526df4d", 00:07:32.471 "assigned_rate_limits": { 00:07:32.471 "rw_ios_per_sec": 0, 00:07:32.471 "rw_mbytes_per_sec": 0, 00:07:32.471 "r_mbytes_per_sec": 0, 00:07:32.471 "w_mbytes_per_sec": 0 00:07:32.471 }, 00:07:32.471 "claimed": true, 00:07:32.471 "claim_type": "exclusive_write", 00:07:32.471 "zoned": false, 00:07:32.471 "supported_io_types": { 00:07:32.471 "read": true, 00:07:32.471 "write": true, 00:07:32.471 "unmap": true, 00:07:32.471 "flush": true, 00:07:32.471 "reset": true, 00:07:32.471 "nvme_admin": false, 00:07:32.471 "nvme_io": false, 00:07:32.471 "nvme_io_md": 
false, 00:07:32.471 "write_zeroes": true, 00:07:32.471 "zcopy": true, 00:07:32.471 "get_zone_info": false, 00:07:32.471 "zone_management": false, 00:07:32.471 "zone_append": false, 00:07:32.471 "compare": false, 00:07:32.471 "compare_and_write": false, 00:07:32.471 "abort": true, 00:07:32.471 "seek_hole": false, 00:07:32.471 "seek_data": false, 00:07:32.471 "copy": true, 00:07:32.471 "nvme_iov_md": false 00:07:32.471 }, 00:07:32.471 "memory_domains": [ 00:07:32.471 { 00:07:32.471 "dma_device_id": "system", 00:07:32.471 "dma_device_type": 1 00:07:32.471 }, 00:07:32.471 { 00:07:32.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.471 "dma_device_type": 2 00:07:32.471 } 00:07:32.471 ], 00:07:32.471 "driver_specific": {} 00:07:32.471 } 00:07:32.471 ] 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.471 "name": "Existed_Raid", 00:07:32.471 "uuid": "127ba0eb-7c8e-4900-acba-9209042ddfea", 00:07:32.471 "strip_size_kb": 64, 00:07:32.471 "state": "online", 00:07:32.471 "raid_level": "concat", 00:07:32.471 "superblock": false, 00:07:32.471 "num_base_bdevs": 2, 00:07:32.471 "num_base_bdevs_discovered": 2, 00:07:32.471 "num_base_bdevs_operational": 2, 00:07:32.471 "base_bdevs_list": [ 00:07:32.471 { 00:07:32.471 "name": "BaseBdev1", 00:07:32.471 "uuid": "19312d59-2c59-42f1-b63f-1f3f811b230f", 00:07:32.471 "is_configured": true, 00:07:32.471 "data_offset": 0, 00:07:32.471 "data_size": 65536 00:07:32.471 }, 00:07:32.471 { 00:07:32.471 "name": "BaseBdev2", 00:07:32.471 "uuid": "a77d2393-a449-4ccf-86d8-1f426526df4d", 00:07:32.471 "is_configured": true, 00:07:32.471 "data_offset": 0, 00:07:32.471 "data_size": 65536 00:07:32.471 } 00:07:32.471 ] 00:07:32.471 }' 00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:32.471 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.731 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:32.731 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:32.731 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:32.731 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:32.731 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:32.731 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:32.731 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:32.731 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:32.731 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.731 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.731 [2024-09-29 21:38:51.637867] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.731 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.731 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:32.731 "name": "Existed_Raid", 00:07:32.731 "aliases": [ 00:07:32.731 "127ba0eb-7c8e-4900-acba-9209042ddfea" 00:07:32.731 ], 00:07:32.731 "product_name": "Raid Volume", 00:07:32.731 "block_size": 512, 00:07:32.731 "num_blocks": 131072, 00:07:32.731 "uuid": "127ba0eb-7c8e-4900-acba-9209042ddfea", 00:07:32.731 "assigned_rate_limits": { 00:07:32.731 "rw_ios_per_sec": 0, 00:07:32.731 "rw_mbytes_per_sec": 0, 00:07:32.731 "r_mbytes_per_sec": 
0, 00:07:32.731 "w_mbytes_per_sec": 0 00:07:32.731 }, 00:07:32.731 "claimed": false, 00:07:32.731 "zoned": false, 00:07:32.731 "supported_io_types": { 00:07:32.731 "read": true, 00:07:32.731 "write": true, 00:07:32.731 "unmap": true, 00:07:32.731 "flush": true, 00:07:32.731 "reset": true, 00:07:32.731 "nvme_admin": false, 00:07:32.731 "nvme_io": false, 00:07:32.731 "nvme_io_md": false, 00:07:32.731 "write_zeroes": true, 00:07:32.731 "zcopy": false, 00:07:32.731 "get_zone_info": false, 00:07:32.731 "zone_management": false, 00:07:32.731 "zone_append": false, 00:07:32.731 "compare": false, 00:07:32.731 "compare_and_write": false, 00:07:32.731 "abort": false, 00:07:32.731 "seek_hole": false, 00:07:32.731 "seek_data": false, 00:07:32.731 "copy": false, 00:07:32.731 "nvme_iov_md": false 00:07:32.731 }, 00:07:32.731 "memory_domains": [ 00:07:32.731 { 00:07:32.731 "dma_device_id": "system", 00:07:32.731 "dma_device_type": 1 00:07:32.731 }, 00:07:32.731 { 00:07:32.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.731 "dma_device_type": 2 00:07:32.731 }, 00:07:32.731 { 00:07:32.731 "dma_device_id": "system", 00:07:32.731 "dma_device_type": 1 00:07:32.731 }, 00:07:32.731 { 00:07:32.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.731 "dma_device_type": 2 00:07:32.731 } 00:07:32.731 ], 00:07:32.731 "driver_specific": { 00:07:32.731 "raid": { 00:07:32.731 "uuid": "127ba0eb-7c8e-4900-acba-9209042ddfea", 00:07:32.731 "strip_size_kb": 64, 00:07:32.731 "state": "online", 00:07:32.731 "raid_level": "concat", 00:07:32.731 "superblock": false, 00:07:32.731 "num_base_bdevs": 2, 00:07:32.731 "num_base_bdevs_discovered": 2, 00:07:32.731 "num_base_bdevs_operational": 2, 00:07:32.731 "base_bdevs_list": [ 00:07:32.731 { 00:07:32.731 "name": "BaseBdev1", 00:07:32.731 "uuid": "19312d59-2c59-42f1-b63f-1f3f811b230f", 00:07:32.731 "is_configured": true, 00:07:32.731 "data_offset": 0, 00:07:32.731 "data_size": 65536 00:07:32.731 }, 00:07:32.731 { 00:07:32.731 "name": "BaseBdev2", 
00:07:32.731 "uuid": "a77d2393-a449-4ccf-86d8-1f426526df4d", 00:07:32.731 "is_configured": true, 00:07:32.731 "data_offset": 0, 00:07:32.731 "data_size": 65536 00:07:32.731 } 00:07:32.731 ] 00:07:32.731 } 00:07:32.731 } 00:07:32.731 }' 00:07:32.731 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:32.992 BaseBdev2' 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.992 [2024-09-29 21:38:51.865264] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:32.992 [2024-09-29 21:38:51.865296] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.992 [2024-09-29 21:38:51.865346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.992 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.252 21:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.252 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.252 "name": "Existed_Raid", 00:07:33.252 "uuid": "127ba0eb-7c8e-4900-acba-9209042ddfea", 00:07:33.252 "strip_size_kb": 64, 00:07:33.252 
"state": "offline", 00:07:33.252 "raid_level": "concat", 00:07:33.252 "superblock": false, 00:07:33.252 "num_base_bdevs": 2, 00:07:33.252 "num_base_bdevs_discovered": 1, 00:07:33.252 "num_base_bdevs_operational": 1, 00:07:33.252 "base_bdevs_list": [ 00:07:33.252 { 00:07:33.252 "name": null, 00:07:33.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.252 "is_configured": false, 00:07:33.252 "data_offset": 0, 00:07:33.252 "data_size": 65536 00:07:33.252 }, 00:07:33.252 { 00:07:33.252 "name": "BaseBdev2", 00:07:33.252 "uuid": "a77d2393-a449-4ccf-86d8-1f426526df4d", 00:07:33.252 "is_configured": true, 00:07:33.252 "data_offset": 0, 00:07:33.252 "data_size": 65536 00:07:33.252 } 00:07:33.252 ] 00:07:33.252 }' 00:07:33.252 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.252 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.512 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:33.512 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:33.512 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.512 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:33.512 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.512 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.512 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.512 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:33.512 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:33.512 21:38:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:33.512 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.512 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.512 [2024-09-29 21:38:52.418504] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:33.512 [2024-09-29 21:38:52.418563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61755 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61755 ']' 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 61755 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61755 00:07:33.772 killing process with pid 61755 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61755' 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61755 00:07:33.772 [2024-09-29 21:38:52.621819] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.772 21:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61755 00:07:33.772 [2024-09-29 21:38:52.638930] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.151 21:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:35.151 00:07:35.151 real 0m5.181s 00:07:35.151 user 0m7.175s 00:07:35.151 sys 0m0.943s 00:07:35.151 ************************************ 00:07:35.151 END TEST raid_state_function_test 00:07:35.151 ************************************ 00:07:35.151 21:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.151 21:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.151 21:38:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:35.151 21:38:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:35.151 21:38:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.151 21:38:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.151 ************************************ 00:07:35.151 START TEST raid_state_function_test_sb 00:07:35.151 ************************************ 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62008 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.151 Process raid pid: 62008 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62008' 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62008 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62008 ']' 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.151 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.411 [2024-09-29 21:38:54.142813] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:35.411 [2024-09-29 21:38:54.142943] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.411 [2024-09-29 21:38:54.311140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.727 [2024-09-29 21:38:54.550658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.984 [2024-09-29 21:38:54.778931] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.984 [2024-09-29 21:38:54.779102] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.984 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.984 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:35.984 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:35.984 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.984 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.243 [2024-09-29 21:38:54.970651] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:36.243 [2024-09-29 21:38:54.970713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.243 [2024-09-29 21:38:54.970723] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.243 [2024-09-29 21:38:54.970733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.243 21:38:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.243 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.243 "name": "Existed_Raid", 00:07:36.243 "uuid": "c84243cf-d24f-4c10-b952-1d2a6e57fe2b", 00:07:36.243 "strip_size_kb": 64, 00:07:36.243 "state": "configuring", 00:07:36.243 "raid_level": "concat", 00:07:36.243 "superblock": true, 00:07:36.243 "num_base_bdevs": 2, 00:07:36.243 "num_base_bdevs_discovered": 0, 00:07:36.243 "num_base_bdevs_operational": 2, 00:07:36.243 "base_bdevs_list": [ 00:07:36.243 { 00:07:36.243 "name": "BaseBdev1", 00:07:36.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.243 "is_configured": false, 00:07:36.243 "data_offset": 0, 00:07:36.243 "data_size": 0 00:07:36.243 }, 00:07:36.243 { 00:07:36.243 "name": "BaseBdev2", 00:07:36.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.243 "is_configured": false, 00:07:36.243 "data_offset": 0, 00:07:36.243 "data_size": 0 00:07:36.243 } 00:07:36.243 ] 00:07:36.243 }' 00:07:36.243 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.243 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.502 [2024-09-29 21:38:55.401785] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.502 
[2024-09-29 21:38:55.401889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.502 [2024-09-29 21:38:55.413800] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.502 [2024-09-29 21:38:55.413893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.502 [2024-09-29 21:38:55.413920] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.502 [2024-09-29 21:38:55.413946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.502 [2024-09-29 21:38:55.478403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.502 BaseBdev1 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.502 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.761 [ 00:07:36.761 { 00:07:36.761 "name": "BaseBdev1", 00:07:36.761 "aliases": [ 00:07:36.761 "11eb9cb9-aaf3-490b-822f-c7e6a424a9fe" 00:07:36.761 ], 00:07:36.761 "product_name": "Malloc disk", 00:07:36.761 "block_size": 512, 00:07:36.761 "num_blocks": 65536, 00:07:36.761 "uuid": "11eb9cb9-aaf3-490b-822f-c7e6a424a9fe", 00:07:36.761 "assigned_rate_limits": { 00:07:36.761 "rw_ios_per_sec": 0, 00:07:36.761 "rw_mbytes_per_sec": 0, 00:07:36.761 "r_mbytes_per_sec": 0, 00:07:36.761 "w_mbytes_per_sec": 0 00:07:36.761 }, 00:07:36.761 "claimed": true, 00:07:36.761 "claim_type": 
"exclusive_write", 00:07:36.761 "zoned": false, 00:07:36.761 "supported_io_types": { 00:07:36.761 "read": true, 00:07:36.761 "write": true, 00:07:36.761 "unmap": true, 00:07:36.761 "flush": true, 00:07:36.761 "reset": true, 00:07:36.761 "nvme_admin": false, 00:07:36.761 "nvme_io": false, 00:07:36.761 "nvme_io_md": false, 00:07:36.761 "write_zeroes": true, 00:07:36.761 "zcopy": true, 00:07:36.761 "get_zone_info": false, 00:07:36.761 "zone_management": false, 00:07:36.761 "zone_append": false, 00:07:36.761 "compare": false, 00:07:36.761 "compare_and_write": false, 00:07:36.761 "abort": true, 00:07:36.761 "seek_hole": false, 00:07:36.761 "seek_data": false, 00:07:36.761 "copy": true, 00:07:36.761 "nvme_iov_md": false 00:07:36.761 }, 00:07:36.761 "memory_domains": [ 00:07:36.761 { 00:07:36.761 "dma_device_id": "system", 00:07:36.761 "dma_device_type": 1 00:07:36.761 }, 00:07:36.761 { 00:07:36.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.761 "dma_device_type": 2 00:07:36.761 } 00:07:36.761 ], 00:07:36.761 "driver_specific": {} 00:07:36.761 } 00:07:36.761 ] 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.761 "name": "Existed_Raid", 00:07:36.761 "uuid": "c85dd1ca-07c2-4c04-b8bd-3b4297f77a95", 00:07:36.761 "strip_size_kb": 64, 00:07:36.761 "state": "configuring", 00:07:36.761 "raid_level": "concat", 00:07:36.761 "superblock": true, 00:07:36.761 "num_base_bdevs": 2, 00:07:36.761 "num_base_bdevs_discovered": 1, 00:07:36.761 "num_base_bdevs_operational": 2, 00:07:36.761 "base_bdevs_list": [ 00:07:36.761 { 00:07:36.761 "name": "BaseBdev1", 00:07:36.761 "uuid": "11eb9cb9-aaf3-490b-822f-c7e6a424a9fe", 00:07:36.761 "is_configured": true, 00:07:36.761 "data_offset": 2048, 00:07:36.761 "data_size": 63488 00:07:36.761 }, 00:07:36.761 { 00:07:36.761 "name": "BaseBdev2", 00:07:36.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.761 "is_configured": false, 00:07:36.761 
"data_offset": 0, 00:07:36.761 "data_size": 0 00:07:36.761 } 00:07:36.761 ] 00:07:36.761 }' 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.761 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.019 [2024-09-29 21:38:55.917650] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.019 [2024-09-29 21:38:55.917692] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.019 [2024-09-29 21:38:55.929698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.019 [2024-09-29 21:38:55.931860] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.019 [2024-09-29 21:38:55.931904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.019 21:38:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.019 "name": "Existed_Raid", 00:07:37.019 "uuid": "971bf58d-cbc2-4667-8a28-0a2290607c46", 00:07:37.019 "strip_size_kb": 64, 00:07:37.019 "state": "configuring", 00:07:37.019 "raid_level": "concat", 00:07:37.019 "superblock": true, 00:07:37.020 "num_base_bdevs": 2, 00:07:37.020 "num_base_bdevs_discovered": 1, 00:07:37.020 "num_base_bdevs_operational": 2, 00:07:37.020 "base_bdevs_list": [ 00:07:37.020 { 00:07:37.020 "name": "BaseBdev1", 00:07:37.020 "uuid": "11eb9cb9-aaf3-490b-822f-c7e6a424a9fe", 00:07:37.020 "is_configured": true, 00:07:37.020 "data_offset": 2048, 00:07:37.020 "data_size": 63488 00:07:37.020 }, 00:07:37.020 { 00:07:37.020 "name": "BaseBdev2", 00:07:37.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.020 "is_configured": false, 00:07:37.020 "data_offset": 0, 00:07:37.020 "data_size": 0 00:07:37.020 } 00:07:37.020 ] 00:07:37.020 }' 00:07:37.020 21:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.020 21:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.588 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.588 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.588 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.588 [2024-09-29 21:38:56.387948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.588 [2024-09-29 21:38:56.388319] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:37.588 [2024-09-29 21:38:56.388387] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:37.588 [2024-09-29 21:38:56.388700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 
00:07:37.588 [2024-09-29 21:38:56.388902] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:37.588 BaseBdev2 00:07:37.588 [2024-09-29 21:38:56.388950] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:37.588 [2024-09-29 21:38:56.389143] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.588 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.588 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:37.589 [ 00:07:37.589 { 00:07:37.589 "name": "BaseBdev2", 00:07:37.589 "aliases": [ 00:07:37.589 "09e2579b-9254-466f-bf29-fcc0772c4ce0" 00:07:37.589 ], 00:07:37.589 "product_name": "Malloc disk", 00:07:37.589 "block_size": 512, 00:07:37.589 "num_blocks": 65536, 00:07:37.589 "uuid": "09e2579b-9254-466f-bf29-fcc0772c4ce0", 00:07:37.589 "assigned_rate_limits": { 00:07:37.589 "rw_ios_per_sec": 0, 00:07:37.589 "rw_mbytes_per_sec": 0, 00:07:37.589 "r_mbytes_per_sec": 0, 00:07:37.589 "w_mbytes_per_sec": 0 00:07:37.589 }, 00:07:37.589 "claimed": true, 00:07:37.589 "claim_type": "exclusive_write", 00:07:37.589 "zoned": false, 00:07:37.589 "supported_io_types": { 00:07:37.589 "read": true, 00:07:37.589 "write": true, 00:07:37.589 "unmap": true, 00:07:37.589 "flush": true, 00:07:37.589 "reset": true, 00:07:37.589 "nvme_admin": false, 00:07:37.589 "nvme_io": false, 00:07:37.589 "nvme_io_md": false, 00:07:37.589 "write_zeroes": true, 00:07:37.589 "zcopy": true, 00:07:37.589 "get_zone_info": false, 00:07:37.589 "zone_management": false, 00:07:37.589 "zone_append": false, 00:07:37.589 "compare": false, 00:07:37.589 "compare_and_write": false, 00:07:37.589 "abort": true, 00:07:37.589 "seek_hole": false, 00:07:37.589 "seek_data": false, 00:07:37.589 "copy": true, 00:07:37.589 "nvme_iov_md": false 00:07:37.589 }, 00:07:37.589 "memory_domains": [ 00:07:37.589 { 00:07:37.589 "dma_device_id": "system", 00:07:37.589 "dma_device_type": 1 00:07:37.589 }, 00:07:37.589 { 00:07:37.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.589 "dma_device_type": 2 00:07:37.589 } 00:07:37.589 ], 00:07:37.589 "driver_specific": {} 00:07:37.589 } 00:07:37.589 ] 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.589 "name": "Existed_Raid", 00:07:37.589 "uuid": "971bf58d-cbc2-4667-8a28-0a2290607c46", 00:07:37.589 "strip_size_kb": 64, 00:07:37.589 "state": "online", 00:07:37.589 "raid_level": "concat", 00:07:37.589 "superblock": true, 00:07:37.589 "num_base_bdevs": 2, 00:07:37.589 "num_base_bdevs_discovered": 2, 00:07:37.589 "num_base_bdevs_operational": 2, 00:07:37.589 "base_bdevs_list": [ 00:07:37.589 { 00:07:37.589 "name": "BaseBdev1", 00:07:37.589 "uuid": "11eb9cb9-aaf3-490b-822f-c7e6a424a9fe", 00:07:37.589 "is_configured": true, 00:07:37.589 "data_offset": 2048, 00:07:37.589 "data_size": 63488 00:07:37.589 }, 00:07:37.589 { 00:07:37.589 "name": "BaseBdev2", 00:07:37.589 "uuid": "09e2579b-9254-466f-bf29-fcc0772c4ce0", 00:07:37.589 "is_configured": true, 00:07:37.589 "data_offset": 2048, 00:07:37.589 "data_size": 63488 00:07:37.589 } 00:07:37.589 ] 00:07:37.589 }' 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.589 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.849 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:37.849 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:37.849 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:37.849 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:37.849 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:37.849 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:37.849 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:37.849 21:38:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.849 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.849 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:37.849 [2024-09-29 21:38:56.807485] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.849 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.109 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.109 "name": "Existed_Raid", 00:07:38.109 "aliases": [ 00:07:38.109 "971bf58d-cbc2-4667-8a28-0a2290607c46" 00:07:38.109 ], 00:07:38.109 "product_name": "Raid Volume", 00:07:38.109 "block_size": 512, 00:07:38.109 "num_blocks": 126976, 00:07:38.109 "uuid": "971bf58d-cbc2-4667-8a28-0a2290607c46", 00:07:38.109 "assigned_rate_limits": { 00:07:38.109 "rw_ios_per_sec": 0, 00:07:38.109 "rw_mbytes_per_sec": 0, 00:07:38.109 "r_mbytes_per_sec": 0, 00:07:38.109 "w_mbytes_per_sec": 0 00:07:38.109 }, 00:07:38.109 "claimed": false, 00:07:38.109 "zoned": false, 00:07:38.109 "supported_io_types": { 00:07:38.109 "read": true, 00:07:38.109 "write": true, 00:07:38.109 "unmap": true, 00:07:38.109 "flush": true, 00:07:38.109 "reset": true, 00:07:38.109 "nvme_admin": false, 00:07:38.109 "nvme_io": false, 00:07:38.109 "nvme_io_md": false, 00:07:38.109 "write_zeroes": true, 00:07:38.109 "zcopy": false, 00:07:38.109 "get_zone_info": false, 00:07:38.109 "zone_management": false, 00:07:38.110 "zone_append": false, 00:07:38.110 "compare": false, 00:07:38.110 "compare_and_write": false, 00:07:38.110 "abort": false, 00:07:38.110 "seek_hole": false, 00:07:38.110 "seek_data": false, 00:07:38.110 "copy": false, 00:07:38.110 "nvme_iov_md": false 00:07:38.110 }, 00:07:38.110 "memory_domains": [ 00:07:38.110 { 00:07:38.110 "dma_device_id": "system", 00:07:38.110 
"dma_device_type": 1 00:07:38.110 }, 00:07:38.110 { 00:07:38.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.110 "dma_device_type": 2 00:07:38.110 }, 00:07:38.110 { 00:07:38.110 "dma_device_id": "system", 00:07:38.110 "dma_device_type": 1 00:07:38.110 }, 00:07:38.110 { 00:07:38.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.110 "dma_device_type": 2 00:07:38.110 } 00:07:38.110 ], 00:07:38.110 "driver_specific": { 00:07:38.110 "raid": { 00:07:38.110 "uuid": "971bf58d-cbc2-4667-8a28-0a2290607c46", 00:07:38.110 "strip_size_kb": 64, 00:07:38.110 "state": "online", 00:07:38.110 "raid_level": "concat", 00:07:38.110 "superblock": true, 00:07:38.110 "num_base_bdevs": 2, 00:07:38.110 "num_base_bdevs_discovered": 2, 00:07:38.110 "num_base_bdevs_operational": 2, 00:07:38.110 "base_bdevs_list": [ 00:07:38.110 { 00:07:38.110 "name": "BaseBdev1", 00:07:38.110 "uuid": "11eb9cb9-aaf3-490b-822f-c7e6a424a9fe", 00:07:38.110 "is_configured": true, 00:07:38.110 "data_offset": 2048, 00:07:38.110 "data_size": 63488 00:07:38.110 }, 00:07:38.110 { 00:07:38.110 "name": "BaseBdev2", 00:07:38.110 "uuid": "09e2579b-9254-466f-bf29-fcc0772c4ce0", 00:07:38.110 "is_configured": true, 00:07:38.110 "data_offset": 2048, 00:07:38.110 "data_size": 63488 00:07:38.110 } 00:07:38.110 ] 00:07:38.110 } 00:07:38.110 } 00:07:38.110 }' 00:07:38.110 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.110 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:38.110 BaseBdev2' 00:07:38.110 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.110 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.110 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:38.110 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:38.110 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.110 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.110 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.110 21:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.110 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.110 21:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.110 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.110 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:38.110 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.110 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.110 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.110 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.110 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.110 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.110 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:38.110 21:38:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.110 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.110 [2024-09-29 21:38:57.058918] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:38.110 [2024-09-29 21:38:57.058952] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.110 [2024-09-29 21:38:57.058997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.369 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.369 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:38.369 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:38.369 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.369 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.369 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:38.369 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:38.369 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.369 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:38.369 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:38.369 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.370 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:38.370 21:38:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.370 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.370 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.370 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.370 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.370 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.370 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.370 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.370 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.370 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.370 "name": "Existed_Raid", 00:07:38.370 "uuid": "971bf58d-cbc2-4667-8a28-0a2290607c46", 00:07:38.370 "strip_size_kb": 64, 00:07:38.370 "state": "offline", 00:07:38.370 "raid_level": "concat", 00:07:38.370 "superblock": true, 00:07:38.370 "num_base_bdevs": 2, 00:07:38.370 "num_base_bdevs_discovered": 1, 00:07:38.370 "num_base_bdevs_operational": 1, 00:07:38.370 "base_bdevs_list": [ 00:07:38.370 { 00:07:38.370 "name": null, 00:07:38.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.370 "is_configured": false, 00:07:38.370 "data_offset": 0, 00:07:38.370 "data_size": 63488 00:07:38.370 }, 00:07:38.370 { 00:07:38.370 "name": "BaseBdev2", 00:07:38.370 "uuid": "09e2579b-9254-466f-bf29-fcc0772c4ce0", 00:07:38.370 "is_configured": true, 00:07:38.370 "data_offset": 2048, 00:07:38.370 "data_size": 63488 00:07:38.370 } 00:07:38.370 ] 00:07:38.370 }' 00:07:38.370 21:38:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.370 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.629 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:38.629 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.629 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:38.629 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.629 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.629 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.629 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.629 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:38.629 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:38.629 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:38.629 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.629 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.889 [2024-09-29 21:38:57.614167] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:38.889 [2024-09-29 21:38:57.614276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62008 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62008 ']' 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62008 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62008 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.889 21:38:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62008' 00:07:38.889 killing process with pid 62008 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62008 00:07:38.889 [2024-09-29 21:38:57.806939] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.889 21:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62008 00:07:38.889 [2024-09-29 21:38:57.824057] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.272 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:40.272 00:07:40.272 real 0m5.108s 00:07:40.272 user 0m7.052s 00:07:40.272 sys 0m0.923s 00:07:40.272 ************************************ 00:07:40.272 END TEST raid_state_function_test_sb 00:07:40.272 ************************************ 00:07:40.272 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.272 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.272 21:38:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:40.272 21:38:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:40.272 21:38:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.272 21:38:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.272 ************************************ 00:07:40.272 START TEST raid_superblock_test 00:07:40.272 ************************************ 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62260 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62260 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62260 ']' 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.272 21:38:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.531 [2024-09-29 21:38:59.318424] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:40.531 [2024-09-29 21:38:59.318649] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62260 ] 00:07:40.531 [2024-09-29 21:38:59.490319] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.790 [2024-09-29 21:38:59.729564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.050 [2024-09-29 21:38:59.962105] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.050 [2024-09-29 21:38:59.962239] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:41.309 21:39:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.309 malloc1 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.309 [2024-09-29 21:39:00.203382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:41.309 [2024-09-29 21:39:00.203506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.309 [2024-09-29 21:39:00.203550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:41.309 [2024-09-29 21:39:00.203582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.309 
[2024-09-29 21:39:00.205991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.309 [2024-09-29 21:39:00.206094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:41.309 pt1 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:41.309 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:41.310 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:41.310 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:41.310 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:41.310 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:41.310 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:41.310 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:41.310 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.310 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.569 malloc2 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.569 21:39:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.569 [2024-09-29 21:39:00.303800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:41.569 [2024-09-29 21:39:00.303860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.569 [2024-09-29 21:39:00.303884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:41.569 [2024-09-29 21:39:00.303894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.569 [2024-09-29 21:39:00.306309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.569 [2024-09-29 21:39:00.306345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:41.569 pt2 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.569 [2024-09-29 21:39:00.315855] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:41.569 [2024-09-29 21:39:00.317962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:41.569 [2024-09-29 21:39:00.318150] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:41.569 [2024-09-29 21:39:00.318165] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:41.569 
[2024-09-29 21:39:00.318400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:41.569 [2024-09-29 21:39:00.318562] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:41.569 [2024-09-29 21:39:00.318580] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:41.569 [2024-09-29 21:39:00.318729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.569 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.569 21:39:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.570 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.570 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.570 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.570 "name": "raid_bdev1", 00:07:41.570 "uuid": "ed827eab-0fdd-4b69-83a6-235d611971fc", 00:07:41.570 "strip_size_kb": 64, 00:07:41.570 "state": "online", 00:07:41.570 "raid_level": "concat", 00:07:41.570 "superblock": true, 00:07:41.570 "num_base_bdevs": 2, 00:07:41.570 "num_base_bdevs_discovered": 2, 00:07:41.570 "num_base_bdevs_operational": 2, 00:07:41.570 "base_bdevs_list": [ 00:07:41.570 { 00:07:41.570 "name": "pt1", 00:07:41.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:41.570 "is_configured": true, 00:07:41.570 "data_offset": 2048, 00:07:41.570 "data_size": 63488 00:07:41.570 }, 00:07:41.570 { 00:07:41.570 "name": "pt2", 00:07:41.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.570 "is_configured": true, 00:07:41.570 "data_offset": 2048, 00:07:41.570 "data_size": 63488 00:07:41.570 } 00:07:41.570 ] 00:07:41.570 }' 00:07:41.570 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.570 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.829 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:41.829 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:41.829 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:41.829 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:41.829 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:41.829 
21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:41.829 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:41.829 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:41.829 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.829 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.829 [2024-09-29 21:39:00.695371] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:41.829 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.829 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:41.829 "name": "raid_bdev1", 00:07:41.829 "aliases": [ 00:07:41.829 "ed827eab-0fdd-4b69-83a6-235d611971fc" 00:07:41.829 ], 00:07:41.829 "product_name": "Raid Volume", 00:07:41.829 "block_size": 512, 00:07:41.829 "num_blocks": 126976, 00:07:41.829 "uuid": "ed827eab-0fdd-4b69-83a6-235d611971fc", 00:07:41.829 "assigned_rate_limits": { 00:07:41.829 "rw_ios_per_sec": 0, 00:07:41.829 "rw_mbytes_per_sec": 0, 00:07:41.829 "r_mbytes_per_sec": 0, 00:07:41.829 "w_mbytes_per_sec": 0 00:07:41.829 }, 00:07:41.829 "claimed": false, 00:07:41.829 "zoned": false, 00:07:41.829 "supported_io_types": { 00:07:41.829 "read": true, 00:07:41.829 "write": true, 00:07:41.830 "unmap": true, 00:07:41.830 "flush": true, 00:07:41.830 "reset": true, 00:07:41.830 "nvme_admin": false, 00:07:41.830 "nvme_io": false, 00:07:41.830 "nvme_io_md": false, 00:07:41.830 "write_zeroes": true, 00:07:41.830 "zcopy": false, 00:07:41.830 "get_zone_info": false, 00:07:41.830 "zone_management": false, 00:07:41.830 "zone_append": false, 00:07:41.830 "compare": false, 00:07:41.830 "compare_and_write": false, 00:07:41.830 "abort": false, 00:07:41.830 "seek_hole": false, 00:07:41.830 
"seek_data": false, 00:07:41.830 "copy": false, 00:07:41.830 "nvme_iov_md": false 00:07:41.830 }, 00:07:41.830 "memory_domains": [ 00:07:41.830 { 00:07:41.830 "dma_device_id": "system", 00:07:41.830 "dma_device_type": 1 00:07:41.830 }, 00:07:41.830 { 00:07:41.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.830 "dma_device_type": 2 00:07:41.830 }, 00:07:41.830 { 00:07:41.830 "dma_device_id": "system", 00:07:41.830 "dma_device_type": 1 00:07:41.830 }, 00:07:41.830 { 00:07:41.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.830 "dma_device_type": 2 00:07:41.830 } 00:07:41.830 ], 00:07:41.830 "driver_specific": { 00:07:41.830 "raid": { 00:07:41.830 "uuid": "ed827eab-0fdd-4b69-83a6-235d611971fc", 00:07:41.830 "strip_size_kb": 64, 00:07:41.830 "state": "online", 00:07:41.830 "raid_level": "concat", 00:07:41.830 "superblock": true, 00:07:41.830 "num_base_bdevs": 2, 00:07:41.830 "num_base_bdevs_discovered": 2, 00:07:41.830 "num_base_bdevs_operational": 2, 00:07:41.830 "base_bdevs_list": [ 00:07:41.830 { 00:07:41.830 "name": "pt1", 00:07:41.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:41.830 "is_configured": true, 00:07:41.830 "data_offset": 2048, 00:07:41.830 "data_size": 63488 00:07:41.830 }, 00:07:41.830 { 00:07:41.830 "name": "pt2", 00:07:41.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:41.830 "is_configured": true, 00:07:41.830 "data_offset": 2048, 00:07:41.830 "data_size": 63488 00:07:41.830 } 00:07:41.830 ] 00:07:41.830 } 00:07:41.830 } 00:07:41.830 }' 00:07:41.830 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:41.830 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:41.830 pt2' 00:07:41.830 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.090 21:39:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.090 [2024-09-29 21:39:00.926930] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ed827eab-0fdd-4b69-83a6-235d611971fc 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ed827eab-0fdd-4b69-83a6-235d611971fc ']' 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.090 [2024-09-29 21:39:00.970642] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:42.090 [2024-09-29 21:39:00.970707] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:42.090 [2024-09-29 21:39:00.970810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.090 [2024-09-29 21:39:00.970867] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.090 [2024-09-29 21:39:00.970917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.090 21:39:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.090 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.349 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.349 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:42.349 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:42.349 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:42.349 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:42.349 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:42.349 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.349 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:42.349 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.349 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:42.349 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.349 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.349 [2024-09-29 21:39:01.094430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:42.349 [2024-09-29 21:39:01.096412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:42.349 [2024-09-29 21:39:01.096476] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:42.349 [2024-09-29 21:39:01.096521] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:42.349 [2024-09-29 21:39:01.096534] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:42.350 [2024-09-29 21:39:01.096544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:42.350 request: 00:07:42.350 { 00:07:42.350 "name": "raid_bdev1", 00:07:42.350 "raid_level": "concat", 00:07:42.350 "base_bdevs": [ 00:07:42.350 "malloc1", 00:07:42.350 "malloc2" 00:07:42.350 ], 00:07:42.350 "strip_size_kb": 64, 00:07:42.350 "superblock": false, 00:07:42.350 "method": "bdev_raid_create", 00:07:42.350 "req_id": 1 00:07:42.350 } 00:07:42.350 Got JSON-RPC error response 00:07:42.350 response: 00:07:42.350 { 00:07:42.350 "code": -17, 00:07:42.350 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:42.350 } 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.350 21:39:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.350 [2024-09-29 21:39:01.158303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:42.350 [2024-09-29 21:39:01.158420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.350 [2024-09-29 21:39:01.158456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:42.350 [2024-09-29 21:39:01.158487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.350 [2024-09-29 21:39:01.160861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.350 [2024-09-29 21:39:01.160933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:42.350 [2024-09-29 21:39:01.161020] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:42.350 [2024-09-29 21:39:01.161113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:42.350 pt1 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:42.350 21:39:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.350 "name": "raid_bdev1", 00:07:42.350 "uuid": "ed827eab-0fdd-4b69-83a6-235d611971fc", 00:07:42.350 "strip_size_kb": 64, 00:07:42.350 "state": "configuring", 00:07:42.350 "raid_level": "concat", 00:07:42.350 "superblock": true, 00:07:42.350 "num_base_bdevs": 2, 00:07:42.350 "num_base_bdevs_discovered": 1, 00:07:42.350 "num_base_bdevs_operational": 2, 00:07:42.350 "base_bdevs_list": [ 
00:07:42.350 { 00:07:42.350 "name": "pt1", 00:07:42.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:42.350 "is_configured": true, 00:07:42.350 "data_offset": 2048, 00:07:42.350 "data_size": 63488 00:07:42.350 }, 00:07:42.350 { 00:07:42.350 "name": null, 00:07:42.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:42.350 "is_configured": false, 00:07:42.350 "data_offset": 2048, 00:07:42.350 "data_size": 63488 00:07:42.350 } 00:07:42.350 ] 00:07:42.350 }' 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.350 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.610 [2024-09-29 21:39:01.533653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:42.610 [2024-09-29 21:39:01.533712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.610 [2024-09-29 21:39:01.533730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:42.610 [2024-09-29 21:39:01.533741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.610 [2024-09-29 21:39:01.534191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.610 [2024-09-29 21:39:01.534219] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:42.610 [2024-09-29 21:39:01.534303] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:42.610 [2024-09-29 21:39:01.534326] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:42.610 [2024-09-29 21:39:01.534434] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:42.610 [2024-09-29 21:39:01.534445] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.610 [2024-09-29 21:39:01.534680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:42.610 [2024-09-29 21:39:01.534822] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:42.610 [2024-09-29 21:39:01.534839] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:42.610 [2024-09-29 21:39:01.534962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.610 pt2 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.610 "name": "raid_bdev1", 00:07:42.610 "uuid": "ed827eab-0fdd-4b69-83a6-235d611971fc", 00:07:42.610 "strip_size_kb": 64, 00:07:42.610 "state": "online", 00:07:42.610 "raid_level": "concat", 00:07:42.610 "superblock": true, 00:07:42.610 "num_base_bdevs": 2, 00:07:42.610 "num_base_bdevs_discovered": 2, 00:07:42.610 "num_base_bdevs_operational": 2, 00:07:42.610 "base_bdevs_list": [ 00:07:42.610 { 00:07:42.610 "name": "pt1", 00:07:42.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:42.610 "is_configured": true, 00:07:42.610 "data_offset": 2048, 00:07:42.610 "data_size": 63488 00:07:42.610 }, 00:07:42.610 { 00:07:42.610 "name": "pt2", 00:07:42.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:42.610 "is_configured": true, 00:07:42.610 "data_offset": 2048, 00:07:42.610 "data_size": 
63488 00:07:42.610 } 00:07:42.610 ] 00:07:42.610 }' 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.610 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.179 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:43.179 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:43.179 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.179 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.179 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.179 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.179 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:43.179 21:39:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.179 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.179 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.179 [2024-09-29 21:39:01.977165] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.179 21:39:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.179 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.179 "name": "raid_bdev1", 00:07:43.179 "aliases": [ 00:07:43.179 "ed827eab-0fdd-4b69-83a6-235d611971fc" 00:07:43.179 ], 00:07:43.179 "product_name": "Raid Volume", 00:07:43.179 "block_size": 512, 00:07:43.179 "num_blocks": 126976, 00:07:43.179 "uuid": "ed827eab-0fdd-4b69-83a6-235d611971fc", 00:07:43.179 "assigned_rate_limits": { 00:07:43.179 
"rw_ios_per_sec": 0, 00:07:43.179 "rw_mbytes_per_sec": 0, 00:07:43.179 "r_mbytes_per_sec": 0, 00:07:43.179 "w_mbytes_per_sec": 0 00:07:43.179 }, 00:07:43.179 "claimed": false, 00:07:43.179 "zoned": false, 00:07:43.179 "supported_io_types": { 00:07:43.179 "read": true, 00:07:43.179 "write": true, 00:07:43.179 "unmap": true, 00:07:43.179 "flush": true, 00:07:43.179 "reset": true, 00:07:43.179 "nvme_admin": false, 00:07:43.179 "nvme_io": false, 00:07:43.179 "nvme_io_md": false, 00:07:43.179 "write_zeroes": true, 00:07:43.179 "zcopy": false, 00:07:43.179 "get_zone_info": false, 00:07:43.179 "zone_management": false, 00:07:43.179 "zone_append": false, 00:07:43.179 "compare": false, 00:07:43.179 "compare_and_write": false, 00:07:43.179 "abort": false, 00:07:43.179 "seek_hole": false, 00:07:43.179 "seek_data": false, 00:07:43.179 "copy": false, 00:07:43.179 "nvme_iov_md": false 00:07:43.179 }, 00:07:43.179 "memory_domains": [ 00:07:43.179 { 00:07:43.179 "dma_device_id": "system", 00:07:43.179 "dma_device_type": 1 00:07:43.179 }, 00:07:43.179 { 00:07:43.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.179 "dma_device_type": 2 00:07:43.179 }, 00:07:43.179 { 00:07:43.179 "dma_device_id": "system", 00:07:43.179 "dma_device_type": 1 00:07:43.179 }, 00:07:43.179 { 00:07:43.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.179 "dma_device_type": 2 00:07:43.179 } 00:07:43.179 ], 00:07:43.179 "driver_specific": { 00:07:43.179 "raid": { 00:07:43.179 "uuid": "ed827eab-0fdd-4b69-83a6-235d611971fc", 00:07:43.179 "strip_size_kb": 64, 00:07:43.179 "state": "online", 00:07:43.179 "raid_level": "concat", 00:07:43.179 "superblock": true, 00:07:43.179 "num_base_bdevs": 2, 00:07:43.179 "num_base_bdevs_discovered": 2, 00:07:43.179 "num_base_bdevs_operational": 2, 00:07:43.179 "base_bdevs_list": [ 00:07:43.179 { 00:07:43.179 "name": "pt1", 00:07:43.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:43.179 "is_configured": true, 00:07:43.179 "data_offset": 2048, 00:07:43.179 
"data_size": 63488 00:07:43.179 }, 00:07:43.179 { 00:07:43.179 "name": "pt2", 00:07:43.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:43.179 "is_configured": true, 00:07:43.179 "data_offset": 2048, 00:07:43.179 "data_size": 63488 00:07:43.179 } 00:07:43.179 ] 00:07:43.179 } 00:07:43.179 } 00:07:43.179 }' 00:07:43.179 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.179 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:43.179 pt2' 00:07:43.179 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.179 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.179 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.179 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:43.179 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.180 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.180 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.180 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.180 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.180 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.180 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.180 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.180 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:43.180 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.180 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.180 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.180 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.180 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.439 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:43.439 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:43.439 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.439 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.439 [2024-09-29 21:39:02.172786] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.439 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.439 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ed827eab-0fdd-4b69-83a6-235d611971fc '!=' ed827eab-0fdd-4b69-83a6-235d611971fc ']' 00:07:43.439 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:43.439 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.439 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:43.439 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62260 00:07:43.439 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62260 
']' 00:07:43.439 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 62260 00:07:43.440 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:43.440 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.440 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62260 00:07:43.440 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.440 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.440 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62260' 00:07:43.440 killing process with pid 62260 00:07:43.440 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62260 00:07:43.440 [2024-09-29 21:39:02.260478] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.440 [2024-09-29 21:39:02.260605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.440 [2024-09-29 21:39:02.260676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.440 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62260 00:07:43.440 [2024-09-29 21:39:02.260786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:43.766 [2024-09-29 21:39:02.479932] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.164 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:45.164 ************************************ 00:07:45.164 END TEST raid_superblock_test 00:07:45.164 ************************************ 00:07:45.164 00:07:45.164 real 0m4.589s 00:07:45.164 user 0m6.068s 00:07:45.164 sys 
0m0.883s 00:07:45.164 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.164 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.164 21:39:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:45.164 21:39:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:45.164 21:39:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.164 21:39:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.164 ************************************ 00:07:45.164 START TEST raid_read_error_test 00:07:45.164 ************************************ 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:45.164 
21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TtWALXZW4Y 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62472 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62472 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62472 ']' 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.164 21:39:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.164 [2024-09-29 21:39:03.986789] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:45.165 [2024-09-29 21:39:03.986897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62472 ] 00:07:45.423 [2024-09-29 21:39:04.148772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.423 [2024-09-29 21:39:04.391059] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.683 [2024-09-29 21:39:04.618813] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.683 [2024-09-29 21:39:04.618846] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.943 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.943 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:45.943 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.944 BaseBdev1_malloc 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.944 true 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.944 [2024-09-29 21:39:04.865160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:45.944 [2024-09-29 21:39:04.865226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.944 [2024-09-29 21:39:04.865244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:45.944 [2024-09-29 21:39:04.865256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.944 [2024-09-29 21:39:04.867567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.944 [2024-09-29 21:39:04.867711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:45.944 BaseBdev1 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:45.944 21:39:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.944 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.204 BaseBdev2_malloc 00:07:46.204 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.204 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:46.204 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.204 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.204 true 00:07:46.204 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.204 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:46.204 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.204 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.204 [2024-09-29 21:39:04.963145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:46.204 [2024-09-29 21:39:04.963206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.204 [2024-09-29 21:39:04.963223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:46.204 [2024-09-29 21:39:04.963235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.204 [2024-09-29 21:39:04.965564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.205 [2024-09-29 21:39:04.965616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:07:46.205 BaseBdev2
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.205 [2024-09-29 21:39:04.975204] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:46.205 [2024-09-29 21:39:04.977283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:46.205 [2024-09-29 21:39:04.977482] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:46.205 [2024-09-29 21:39:04.977497] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:46.205 [2024-09-29 21:39:04.977736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:46.205 [2024-09-29 21:39:04.977910] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:46.205 [2024-09-29 21:39:04.977920] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:07:46.205 [2024-09-29 21:39:04.978071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:46.205 21:39:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.205 21:39:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:46.205 21:39:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:46.205 "name": "raid_bdev1",
00:07:46.205 "uuid": "6b74f662-220e-43c0-8605-2ec27a170a7e",
00:07:46.205 "strip_size_kb": 64,
00:07:46.205 "state": "online",
00:07:46.205 "raid_level": "concat",
00:07:46.205 "superblock": true,
00:07:46.205 "num_base_bdevs": 2,
00:07:46.205 "num_base_bdevs_discovered": 2,
00:07:46.205 "num_base_bdevs_operational": 2,
00:07:46.205 "base_bdevs_list": [
00:07:46.205 {
00:07:46.205 "name": "BaseBdev1",
00:07:46.205 "uuid": "e31106a7-65d4-5e5b-a3df-f397f1124ba9",
00:07:46.205 "is_configured": true,
00:07:46.205 "data_offset": 2048,
00:07:46.205 "data_size": 63488
00:07:46.205 },
00:07:46.205 {
00:07:46.205 "name": "BaseBdev2",
00:07:46.205 "uuid": "280d63a3-8533-543f-bf04-8fe67d1c0ce3",
00:07:46.205 "is_configured": true,
00:07:46.205 "data_offset": 2048,
00:07:46.205 "data_size": 63488
00:07:46.205 }
00:07:46.205 ]
00:07:46.205 }'
00:07:46.205 21:39:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:46.205 21:39:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:46.464 21:39:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:46.464 21:39:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:46.724 [2024-09-29 21:39:05.475575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:47.664 "name": "raid_bdev1",
00:07:47.664 "uuid": "6b74f662-220e-43c0-8605-2ec27a170a7e",
00:07:47.664 "strip_size_kb": 64,
00:07:47.664 "state": "online",
00:07:47.664 "raid_level": "concat",
00:07:47.664 "superblock": true,
00:07:47.664 "num_base_bdevs": 2,
00:07:47.664 "num_base_bdevs_discovered": 2,
00:07:47.664 "num_base_bdevs_operational": 2,
00:07:47.664 "base_bdevs_list": [
00:07:47.664 {
00:07:47.664 "name": "BaseBdev1",
00:07:47.664 "uuid": "e31106a7-65d4-5e5b-a3df-f397f1124ba9",
00:07:47.664 "is_configured": true,
00:07:47.664 "data_offset": 2048,
00:07:47.664 "data_size": 63488
00:07:47.664 },
00:07:47.664 {
00:07:47.664 "name": "BaseBdev2",
00:07:47.664 "uuid": "280d63a3-8533-543f-bf04-8fe67d1c0ce3",
00:07:47.664 "is_configured": true,
00:07:47.664 "data_offset": 2048,
00:07:47.664 "data_size": 63488
00:07:47.664 }
00:07:47.664 ]
00:07:47.664 }'
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:47.664 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.924 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:47.924 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:47.924 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:47.924 [2024-09-29 21:39:06.819751] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:47.924 [2024-09-29 21:39:06.819894] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:47.924 [2024-09-29 21:39:06.822542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:47.924 [2024-09-29 21:39:06.822651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:47.924 [2024-09-29 21:39:06.822706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:47.924 [2024-09-29 21:39:06.822751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:07:47.924 {
00:07:47.924 "results": [
00:07:47.924 {
00:07:47.924 "job": "raid_bdev1",
00:07:47.924 "core_mask": "0x1",
00:07:47.924 "workload": "randrw",
00:07:47.924 "percentage": 50,
00:07:47.924 "status": "finished",
00:07:47.924 "queue_depth": 1,
00:07:47.924 "io_size": 131072,
00:07:47.924 "runtime": 1.344925,
00:07:47.924 "iops": 15284.86718590256,
00:07:47.924 "mibps": 1910.60839823782,
00:07:47.924 "io_failed": 1,
00:07:47.924 "io_timeout": 0,
00:07:47.924 "avg_latency_us": 91.76917571799203,
00:07:47.924 "min_latency_us": 24.370305676855896,
00:07:47.924 "max_latency_us": 1387.989519650655
00:07:47.924 }
00:07:47.924 ],
00:07:47.924 "core_count": 1
00:07:47.924 }
00:07:47.924 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:47.924 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62472
00:07:47.924 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62472 ']'
00:07:47.924 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62472
00:07:47.924 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:07:47.924 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:47.925 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62472
00:07:47.925 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:47.925 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:47.925 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62472'
killing process with pid 62472
21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62472
00:07:47.925 [2024-09-29 21:39:06.871693] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62472
00:07:48.184 [2024-09-29 21:39:07.011527] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:49.566 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TtWALXZW4Y
00:07:49.566 21:39:08
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:49.566 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:49.566 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74
00:07:49.566 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:07:49.566 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:49.566 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:49.566 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]]
00:07:49.566
00:07:49.566 real	0m4.518s
00:07:49.566 user	0m5.131s
00:07:49.566 sys	0m0.691s
00:07:49.566 21:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:49.566 ************************************
00:07:49.566 END TEST raid_read_error_test
************************************
00:07:49.566 21:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.566 21:39:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write
00:07:49.566 21:39:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:49.566 21:39:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:49.566 21:39:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:49.566 ************************************
00:07:49.566 START TEST raid_write_error_test
************************************
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mEVKn1apxn
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62612
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62612
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62612 ']'
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:49.566 21:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:49.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
21:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:49.567 21:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:49.567 21:39:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.826 [2024-09-29 21:39:08.587507] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:07:49.826 [2024-09-29 21:39:08.587631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62612 ]
00:07:49.826 [2024-09-29 21:39:08.757455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:50.086 [2024-09-29 21:39:09.003613] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:50.345 [2024-09-29 21:39:09.240018] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:50.345 [2024-09-29 21:39:09.240062] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.604 BaseBdev1_malloc
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.604 true
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.604 [2024-09-29 21:39:09.472707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:50.604 [2024-09-29 21:39:09.472776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:50.604 [2024-09-29 21:39:09.472792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:07:50.604 [2024-09-29 21:39:09.472803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:50.604 [2024-09-29 21:39:09.475141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:50.604 [2024-09-29 21:39:09.475270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:50.604 BaseBdev1
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.604 BaseBdev2_malloc
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.604 true
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.604 [2024-09-29 21:39:09.572756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:50.604 [2024-09-29 21:39:09.572812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:50.604 [2024-09-29 21:39:09.572828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:50.604 [2024-09-29 21:39:09.572839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:50.604 [2024-09-29 21:39:09.575159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:50.604 [2024-09-29 21:39:09.575196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:50.604 BaseBdev2
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.604 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.604 [2024-09-29 21:39:09.584827] bdev_raid.c:3322:raid_bdev_configure_base_bdev:
*DEBUG*: bdev BaseBdev1 is claimed
00:07:50.604 [2024-09-29 21:39:09.586929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:50.604 [2024-09-29 21:39:09.587171] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:50.604 [2024-09-29 21:39:09.587188] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:50.604 [2024-09-29 21:39:09.587415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:50.604 [2024-09-29 21:39:09.587580] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:50.605 [2024-09-29 21:39:09.587597] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:07:50.605 [2024-09-29 21:39:09.587743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:50.865 "name": "raid_bdev1",
00:07:50.865 "uuid": "f4f8f322-83d5-4340-b927-050dda6d4e99",
00:07:50.865 "strip_size_kb": 64,
00:07:50.865 "state": "online",
00:07:50.865 "raid_level": "concat",
00:07:50.865 "superblock": true,
00:07:50.865 "num_base_bdevs": 2,
00:07:50.865 "num_base_bdevs_discovered": 2,
00:07:50.865 "num_base_bdevs_operational": 2,
00:07:50.865 "base_bdevs_list": [
00:07:50.865 {
00:07:50.865 "name": "BaseBdev1",
00:07:50.865 "uuid": "f79d7955-5f04-5179-94b2-b076327705f6",
00:07:50.865 "is_configured": true,
00:07:50.865 "data_offset": 2048,
00:07:50.865 "data_size": 63488
00:07:50.865 },
00:07:50.865 {
00:07:50.865 "name": "BaseBdev2",
00:07:50.865 "uuid": "ca908670-3c9a-5c27-9746-75ede421690b",
00:07:50.865 "is_configured": true,
00:07:50.865 "data_offset": 2048,
00:07:50.865 "data_size": 63488
00:07:50.865 }
00:07:50.865 ]
00:07:50.865 }'
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:50.865 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.125 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:51.125 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:51.385 [2024-09-29 21:39:10.113385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:52.326 "name": "raid_bdev1",
00:07:52.326 "uuid": "f4f8f322-83d5-4340-b927-050dda6d4e99",
00:07:52.326 "strip_size_kb": 64,
00:07:52.326 "state": "online",
00:07:52.326 "raid_level": "concat",
00:07:52.326 "superblock": true,
00:07:52.326 "num_base_bdevs": 2,
00:07:52.326 "num_base_bdevs_discovered": 2,
00:07:52.326 "num_base_bdevs_operational": 2,
00:07:52.326 "base_bdevs_list": [
00:07:52.326 {
00:07:52.326 "name": "BaseBdev1",
00:07:52.326 "uuid": "f79d7955-5f04-5179-94b2-b076327705f6",
00:07:52.326 "is_configured": true,
00:07:52.326 "data_offset": 2048,
00:07:52.326 "data_size": 63488
00:07:52.326 },
00:07:52.326 {
00:07:52.326 "name": "BaseBdev2",
00:07:52.326 "uuid": "ca908670-3c9a-5c27-9746-75ede421690b",
00:07:52.326 "is_configured": true,
00:07:52.326 "data_offset": 2048,
00:07:52.326 "data_size": 63488
00:07:52.326 }
00:07:52.326 ]
00:07:52.326 }'
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:52.326 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.586 [2024-09-29 21:39:11.449525] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:52.586 [2024-09-29 21:39:11.449669] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:52.586 [2024-09-29 21:39:11.452271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:52.586 [2024-09-29 21:39:11.452391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:52.586 [2024-09-29 21:39:11.452446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:52.586 [2024-09-29 21:39:11.452493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:07:52.586 {
00:07:52.586 "results": [
00:07:52.586 {
00:07:52.586 "job": "raid_bdev1",
00:07:52.586 "core_mask": "0x1",
00:07:52.586 "workload": "randrw",
00:07:52.586 "percentage": 50,
00:07:52.586 "status": "finished",
00:07:52.586 "queue_depth": 1,
00:07:52.586 "io_size": 131072,
00:07:52.586 "runtime": 1.336813,
00:07:52.586 "iops": 15228.756752066294,
00:07:52.586 "mibps": 1903.5945940082868,
00:07:52.586 "io_failed": 1,
00:07:52.586 "io_timeout": 0,
00:07:52.586 "avg_latency_us": 92.20822103504109,
00:07:52.586 "min_latency_us": 24.370305676855896,
00:07:52.586 "max_latency_us": 1366.5257641921398
00:07:52.586 }
00:07:52.586 ],
00:07:52.586 "core_count": 1
00:07:52.586 }
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62612
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62612 ']'
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62612
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62612
00:07:52.586 killing process with pid 62612
21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62612'
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62612
00:07:52.586 [2024-09-29 21:39:11.490850] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:52.586 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62612
00:07:52.846 [2024-09-29 21:39:11.641126] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:54.226 21:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:54.226 21:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mEVKn1apxn
00:07:54.226 21:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:54.226 21:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75
00:07:54.226 21:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
************************************
00:07:54.226 END TEST raid_write_error_test
************************************
00:07:54.226 21:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:54.227 21:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:54.227 21:39:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]]
00:07:54.227
00:07:54.227 real	0m4.550s
00:07:54.227 user	0m5.246s
00:07:54.227 sys	0m0.654s
00:07:54.227 21:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:54.227 21:39:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.227 21:39:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:07:54.227 21:39:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:07:54.227 21:39:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:54.227 21:39:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:54.227 21:39:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:54.227 ************************************
00:07:54.227 START TEST raid_state_function_test
************************************
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62757
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62757'
00:07:54.227 Process raid pid: 62757
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62757
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62757 ']'
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:54.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:54.227 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.227 [2024-09-29 21:39:13.192757] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:07:54.227 [2024-09-29 21:39:13.192876] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.486 [2024-09-29 21:39:13.363668] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.746 [2024-09-29 21:39:13.618163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.005 [2024-09-29 21:39:13.854401] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.005 [2024-09-29 21:39:13.854439] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.264 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.264 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:55.264 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:55.264 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.264 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.264 [2024-09-29 21:39:14.017526] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:55.264 [2024-09-29 21:39:14.017591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:55.264 [2024-09-29 21:39:14.017601] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.264 [2024-09-29 21:39:14.017611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.264 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.265 21:39:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.265 "name": "Existed_Raid", 00:07:55.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.265 "strip_size_kb": 0, 00:07:55.265 "state": "configuring", 00:07:55.265 
"raid_level": "raid1", 00:07:55.265 "superblock": false, 00:07:55.265 "num_base_bdevs": 2, 00:07:55.265 "num_base_bdevs_discovered": 0, 00:07:55.265 "num_base_bdevs_operational": 2, 00:07:55.265 "base_bdevs_list": [ 00:07:55.265 { 00:07:55.265 "name": "BaseBdev1", 00:07:55.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.265 "is_configured": false, 00:07:55.265 "data_offset": 0, 00:07:55.265 "data_size": 0 00:07:55.265 }, 00:07:55.265 { 00:07:55.265 "name": "BaseBdev2", 00:07:55.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.265 "is_configured": false, 00:07:55.265 "data_offset": 0, 00:07:55.265 "data_size": 0 00:07:55.265 } 00:07:55.265 ] 00:07:55.265 }' 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.265 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.525 [2024-09-29 21:39:14.428756] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:55.525 [2024-09-29 21:39:14.428863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:55.525 [2024-09-29 21:39:14.436761] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:55.525 [2024-09-29 21:39:14.436844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:55.525 [2024-09-29 21:39:14.436870] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.525 [2024-09-29 21:39:14.436896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.525 [2024-09-29 21:39:14.496821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.525 BaseBdev1 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.525 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.785 [ 00:07:55.785 { 00:07:55.785 "name": "BaseBdev1", 00:07:55.785 "aliases": [ 00:07:55.785 "f314a731-3c91-49c0-88f2-3815014581b9" 00:07:55.785 ], 00:07:55.785 "product_name": "Malloc disk", 00:07:55.785 "block_size": 512, 00:07:55.785 "num_blocks": 65536, 00:07:55.785 "uuid": "f314a731-3c91-49c0-88f2-3815014581b9", 00:07:55.785 "assigned_rate_limits": { 00:07:55.785 "rw_ios_per_sec": 0, 00:07:55.785 "rw_mbytes_per_sec": 0, 00:07:55.785 "r_mbytes_per_sec": 0, 00:07:55.785 "w_mbytes_per_sec": 0 00:07:55.785 }, 00:07:55.785 "claimed": true, 00:07:55.785 "claim_type": "exclusive_write", 00:07:55.785 "zoned": false, 00:07:55.785 "supported_io_types": { 00:07:55.785 "read": true, 00:07:55.785 "write": true, 00:07:55.785 "unmap": true, 00:07:55.785 "flush": true, 00:07:55.785 "reset": true, 00:07:55.785 "nvme_admin": false, 00:07:55.785 "nvme_io": false, 00:07:55.785 "nvme_io_md": false, 00:07:55.785 "write_zeroes": true, 00:07:55.785 "zcopy": true, 00:07:55.785 "get_zone_info": false, 00:07:55.785 "zone_management": false, 00:07:55.785 "zone_append": false, 00:07:55.785 "compare": false, 00:07:55.785 "compare_and_write": false, 00:07:55.785 "abort": true, 00:07:55.785 "seek_hole": false, 00:07:55.785 "seek_data": false, 00:07:55.785 "copy": true, 00:07:55.785 "nvme_iov_md": 
false 00:07:55.785 }, 00:07:55.785 "memory_domains": [ 00:07:55.785 { 00:07:55.785 "dma_device_id": "system", 00:07:55.785 "dma_device_type": 1 00:07:55.785 }, 00:07:55.785 { 00:07:55.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.785 "dma_device_type": 2 00:07:55.785 } 00:07:55.785 ], 00:07:55.785 "driver_specific": {} 00:07:55.785 } 00:07:55.785 ] 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.785 21:39:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.785 "name": "Existed_Raid", 00:07:55.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.785 "strip_size_kb": 0, 00:07:55.785 "state": "configuring", 00:07:55.785 "raid_level": "raid1", 00:07:55.785 "superblock": false, 00:07:55.785 "num_base_bdevs": 2, 00:07:55.785 "num_base_bdevs_discovered": 1, 00:07:55.785 "num_base_bdevs_operational": 2, 00:07:55.785 "base_bdevs_list": [ 00:07:55.785 { 00:07:55.785 "name": "BaseBdev1", 00:07:55.785 "uuid": "f314a731-3c91-49c0-88f2-3815014581b9", 00:07:55.785 "is_configured": true, 00:07:55.785 "data_offset": 0, 00:07:55.785 "data_size": 65536 00:07:55.785 }, 00:07:55.785 { 00:07:55.785 "name": "BaseBdev2", 00:07:55.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.785 "is_configured": false, 00:07:55.785 "data_offset": 0, 00:07:55.785 "data_size": 0 00:07:55.785 } 00:07:55.785 ] 00:07:55.785 }' 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.785 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.045 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.046 [2024-09-29 21:39:14.976098] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:56.046 [2024-09-29 21:39:14.976189] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.046 [2024-09-29 21:39:14.984101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:56.046 [2024-09-29 21:39:14.986215] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.046 [2024-09-29 21:39:14.986305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.046 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.046 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.046 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.046 "name": "Existed_Raid", 00:07:56.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.046 "strip_size_kb": 0, 00:07:56.046 "state": "configuring", 00:07:56.046 "raid_level": "raid1", 00:07:56.046 "superblock": false, 00:07:56.046 "num_base_bdevs": 2, 00:07:56.046 "num_base_bdevs_discovered": 1, 00:07:56.046 "num_base_bdevs_operational": 2, 00:07:56.046 "base_bdevs_list": [ 00:07:56.046 { 00:07:56.046 "name": "BaseBdev1", 00:07:56.046 "uuid": "f314a731-3c91-49c0-88f2-3815014581b9", 00:07:56.046 "is_configured": true, 00:07:56.046 "data_offset": 0, 00:07:56.046 "data_size": 65536 00:07:56.046 }, 00:07:56.046 { 00:07:56.046 "name": "BaseBdev2", 00:07:56.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.046 "is_configured": false, 00:07:56.046 "data_offset": 0, 00:07:56.046 "data_size": 0 00:07:56.046 } 00:07:56.046 
] 00:07:56.046 }' 00:07:56.046 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.046 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.615 [2024-09-29 21:39:15.413869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:56.615 [2024-09-29 21:39:15.413924] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:56.615 [2024-09-29 21:39:15.413936] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:56.615 [2024-09-29 21:39:15.414409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:56.615 [2024-09-29 21:39:15.414595] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:56.615 [2024-09-29 21:39:15.414609] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:56.615 [2024-09-29 21:39:15.414902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.615 BaseBdev2 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:56.615 21:39:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.615 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.615 [ 00:07:56.615 { 00:07:56.615 "name": "BaseBdev2", 00:07:56.615 "aliases": [ 00:07:56.615 "7921225c-c548-4f41-a9ca-723960d60877" 00:07:56.615 ], 00:07:56.615 "product_name": "Malloc disk", 00:07:56.615 "block_size": 512, 00:07:56.615 "num_blocks": 65536, 00:07:56.615 "uuid": "7921225c-c548-4f41-a9ca-723960d60877", 00:07:56.615 "assigned_rate_limits": { 00:07:56.615 "rw_ios_per_sec": 0, 00:07:56.615 "rw_mbytes_per_sec": 0, 00:07:56.615 "r_mbytes_per_sec": 0, 00:07:56.615 "w_mbytes_per_sec": 0 00:07:56.615 }, 00:07:56.615 "claimed": true, 00:07:56.615 "claim_type": "exclusive_write", 00:07:56.615 "zoned": false, 00:07:56.615 "supported_io_types": { 00:07:56.615 "read": true, 00:07:56.615 "write": true, 00:07:56.615 "unmap": true, 00:07:56.615 "flush": true, 00:07:56.615 "reset": true, 00:07:56.615 "nvme_admin": false, 00:07:56.615 "nvme_io": false, 00:07:56.615 "nvme_io_md": 
false, 00:07:56.615 "write_zeroes": true, 00:07:56.615 "zcopy": true, 00:07:56.615 "get_zone_info": false, 00:07:56.615 "zone_management": false, 00:07:56.615 "zone_append": false, 00:07:56.615 "compare": false, 00:07:56.615 "compare_and_write": false, 00:07:56.615 "abort": true, 00:07:56.615 "seek_hole": false, 00:07:56.615 "seek_data": false, 00:07:56.615 "copy": true, 00:07:56.615 "nvme_iov_md": false 00:07:56.615 }, 00:07:56.615 "memory_domains": [ 00:07:56.616 { 00:07:56.616 "dma_device_id": "system", 00:07:56.616 "dma_device_type": 1 00:07:56.616 }, 00:07:56.616 { 00:07:56.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.616 "dma_device_type": 2 00:07:56.616 } 00:07:56.616 ], 00:07:56.616 "driver_specific": {} 00:07:56.616 } 00:07:56.616 ] 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.616 "name": "Existed_Raid", 00:07:56.616 "uuid": "3c88ab56-14e2-4e23-b8fc-1e704832b79a", 00:07:56.616 "strip_size_kb": 0, 00:07:56.616 "state": "online", 00:07:56.616 "raid_level": "raid1", 00:07:56.616 "superblock": false, 00:07:56.616 "num_base_bdevs": 2, 00:07:56.616 "num_base_bdevs_discovered": 2, 00:07:56.616 "num_base_bdevs_operational": 2, 00:07:56.616 "base_bdevs_list": [ 00:07:56.616 { 00:07:56.616 "name": "BaseBdev1", 00:07:56.616 "uuid": "f314a731-3c91-49c0-88f2-3815014581b9", 00:07:56.616 "is_configured": true, 00:07:56.616 "data_offset": 0, 00:07:56.616 "data_size": 65536 00:07:56.616 }, 00:07:56.616 { 00:07:56.616 "name": "BaseBdev2", 00:07:56.616 "uuid": "7921225c-c548-4f41-a9ca-723960d60877", 00:07:56.616 "is_configured": true, 00:07:56.616 "data_offset": 0, 00:07:56.616 "data_size": 65536 00:07:56.616 } 00:07:56.616 ] 00:07:56.616 }' 00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:56.616 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.874 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:56.874 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:56.874 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.874 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.874 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.874 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.874 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:56.874 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.874 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.874 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.874 [2024-09-29 21:39:15.845402] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:57.134 "name": "Existed_Raid", 00:07:57.134 "aliases": [ 00:07:57.134 "3c88ab56-14e2-4e23-b8fc-1e704832b79a" 00:07:57.134 ], 00:07:57.134 "product_name": "Raid Volume", 00:07:57.134 "block_size": 512, 00:07:57.134 "num_blocks": 65536, 00:07:57.134 "uuid": "3c88ab56-14e2-4e23-b8fc-1e704832b79a", 00:07:57.134 "assigned_rate_limits": { 00:07:57.134 "rw_ios_per_sec": 0, 00:07:57.134 "rw_mbytes_per_sec": 0, 00:07:57.134 "r_mbytes_per_sec": 
0, 00:07:57.134 "w_mbytes_per_sec": 0 00:07:57.134 }, 00:07:57.134 "claimed": false, 00:07:57.134 "zoned": false, 00:07:57.134 "supported_io_types": { 00:07:57.134 "read": true, 00:07:57.134 "write": true, 00:07:57.134 "unmap": false, 00:07:57.134 "flush": false, 00:07:57.134 "reset": true, 00:07:57.134 "nvme_admin": false, 00:07:57.134 "nvme_io": false, 00:07:57.134 "nvme_io_md": false, 00:07:57.134 "write_zeroes": true, 00:07:57.134 "zcopy": false, 00:07:57.134 "get_zone_info": false, 00:07:57.134 "zone_management": false, 00:07:57.134 "zone_append": false, 00:07:57.134 "compare": false, 00:07:57.134 "compare_and_write": false, 00:07:57.134 "abort": false, 00:07:57.134 "seek_hole": false, 00:07:57.134 "seek_data": false, 00:07:57.134 "copy": false, 00:07:57.134 "nvme_iov_md": false 00:07:57.134 }, 00:07:57.134 "memory_domains": [ 00:07:57.134 { 00:07:57.134 "dma_device_id": "system", 00:07:57.134 "dma_device_type": 1 00:07:57.134 }, 00:07:57.134 { 00:07:57.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.134 "dma_device_type": 2 00:07:57.134 }, 00:07:57.134 { 00:07:57.134 "dma_device_id": "system", 00:07:57.134 "dma_device_type": 1 00:07:57.134 }, 00:07:57.134 { 00:07:57.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.134 "dma_device_type": 2 00:07:57.134 } 00:07:57.134 ], 00:07:57.134 "driver_specific": { 00:07:57.134 "raid": { 00:07:57.134 "uuid": "3c88ab56-14e2-4e23-b8fc-1e704832b79a", 00:07:57.134 "strip_size_kb": 0, 00:07:57.134 "state": "online", 00:07:57.134 "raid_level": "raid1", 00:07:57.134 "superblock": false, 00:07:57.134 "num_base_bdevs": 2, 00:07:57.134 "num_base_bdevs_discovered": 2, 00:07:57.134 "num_base_bdevs_operational": 2, 00:07:57.134 "base_bdevs_list": [ 00:07:57.134 { 00:07:57.134 "name": "BaseBdev1", 00:07:57.134 "uuid": "f314a731-3c91-49c0-88f2-3815014581b9", 00:07:57.134 "is_configured": true, 00:07:57.134 "data_offset": 0, 00:07:57.134 "data_size": 65536 00:07:57.134 }, 00:07:57.134 { 00:07:57.134 "name": "BaseBdev2", 
00:07:57.134 "uuid": "7921225c-c548-4f41-a9ca-723960d60877", 00:07:57.134 "is_configured": true, 00:07:57.134 "data_offset": 0, 00:07:57.134 "data_size": 65536 00:07:57.134 } 00:07:57.134 ] 00:07:57.134 } 00:07:57.134 } 00:07:57.134 }' 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:57.134 BaseBdev2' 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.134 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.134 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.134 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.134 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.134 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:57.134 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.134 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.134 [2024-09-29 21:39:16.048874] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.394 "name": "Existed_Raid", 00:07:57.394 "uuid": "3c88ab56-14e2-4e23-b8fc-1e704832b79a", 00:07:57.394 "strip_size_kb": 0, 00:07:57.394 "state": "online", 00:07:57.394 "raid_level": "raid1", 00:07:57.394 "superblock": false, 00:07:57.394 "num_base_bdevs": 2, 00:07:57.394 "num_base_bdevs_discovered": 1, 00:07:57.394 "num_base_bdevs_operational": 1, 00:07:57.394 "base_bdevs_list": [ 00:07:57.394 
{ 00:07:57.394 "name": null, 00:07:57.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.394 "is_configured": false, 00:07:57.394 "data_offset": 0, 00:07:57.394 "data_size": 65536 00:07:57.394 }, 00:07:57.394 { 00:07:57.394 "name": "BaseBdev2", 00:07:57.394 "uuid": "7921225c-c548-4f41-a9ca-723960d60877", 00:07:57.394 "is_configured": true, 00:07:57.394 "data_offset": 0, 00:07:57.394 "data_size": 65536 00:07:57.394 } 00:07:57.394 ] 00:07:57.394 }' 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.394 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.654 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:57.654 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:57.654 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.654 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:57.654 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.654 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.654 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.654 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:57.654 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:57.654 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:57.654 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.654 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
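The verification steps in this transcript repeatedly pipe `bdev_get_bdevs` / `bdev_raid_get_bdevs` RPC output through jq to pull out the configured base bdevs before comparing their geometry. As a minimal standalone sketch of that filter (the sample JSON below is illustrative, trimmed by hand, and not captured from a live SPDK target), assuming `jq` is installed:

```shell
#!/bin/sh
# Hypothetical, hand-trimmed sample of a raid volume's bdev_get_bdevs output.
raid_bdev_info='{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        { "name": "BaseBdev1", "is_configured": true },
        { "name": null,        "is_configured": false }
      ]
    }
  }
}'

# The same jq filter the test script uses (bdev_bdev_raid.sh@188 above):
# keep only base bdevs whose is_configured flag is true, emit their names.
base_bdev_names=$(printf '%s' "$raid_bdev_info" |
  jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')

echo "$base_bdev_names"
```

Here the removed slot (`"name": null`) is filtered out, so only `BaseBdev1` is printed — mirroring how the test above ends up with a single operational base bdev after `bdev_malloc_delete BaseBdev1`.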
00:07:57.654 [2024-09-29 21:39:16.621368] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:57.654 [2024-09-29 21:39:16.621534] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.914 [2024-09-29 21:39:16.722209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.914 [2024-09-29 21:39:16.722353] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.914 [2024-09-29 21:39:16.722399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:57.914 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.914 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:57.914 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:57.914 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.914 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:57.914 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.914 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.914 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.914 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:57.914 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:57.914 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:57.914 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62757 00:07:57.914 21:39:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62757 ']' 00:07:57.915 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62757 00:07:57.915 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:57.915 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:57.915 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62757 00:07:57.915 killing process with pid 62757 00:07:57.915 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:57.915 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:57.915 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62757' 00:07:57.915 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62757 00:07:57.915 [2024-09-29 21:39:16.805610] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.915 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62757 00:07:57.915 [2024-09-29 21:39:16.822305] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.317 ************************************ 00:07:59.317 END TEST raid_state_function_test 00:07:59.317 ************************************ 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:59.317 00:07:59.317 real 0m5.048s 00:07:59.317 user 0m6.945s 00:07:59.317 sys 0m0.926s 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.317 21:39:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:59.317 21:39:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:59.317 21:39:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.317 21:39:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.317 ************************************ 00:07:59.317 START TEST raid_state_function_test_sb 00:07:59.317 ************************************ 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:59.317 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63009 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63009' 00:07:59.318 Process raid pid: 63009 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63009 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 63009 ']' 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.318 21:39:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.318 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.593 [2024-09-29 21:39:18.325939] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:59.593 [2024-09-29 21:39:18.326209] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.593 [2024-09-29 21:39:18.497219] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.852 [2024-09-29 21:39:18.744816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.112 [2024-09-29 21:39:18.974618] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.112 [2024-09-29 21:39:18.974651] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.371 [2024-09-29 21:39:19.156290] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.371 [2024-09-29 21:39:19.156359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.371 [2024-09-29 21:39:19.156369] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.371 [2024-09-29 21:39:19.156380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.371 "name": "Existed_Raid", 00:08:00.371 "uuid": "3323ec8d-4a4c-4611-b93d-1daa05e79f3c", 00:08:00.371 "strip_size_kb": 0, 00:08:00.371 "state": "configuring", 00:08:00.371 "raid_level": "raid1", 00:08:00.371 "superblock": true, 00:08:00.371 "num_base_bdevs": 2, 00:08:00.371 "num_base_bdevs_discovered": 0, 00:08:00.371 "num_base_bdevs_operational": 2, 00:08:00.371 "base_bdevs_list": [ 00:08:00.371 { 00:08:00.371 "name": "BaseBdev1", 00:08:00.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.371 "is_configured": false, 00:08:00.371 "data_offset": 0, 00:08:00.371 "data_size": 0 00:08:00.371 }, 00:08:00.371 { 00:08:00.371 "name": "BaseBdev2", 00:08:00.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.371 "is_configured": false, 00:08:00.371 "data_offset": 0, 00:08:00.371 "data_size": 0 00:08:00.371 } 00:08:00.371 ] 00:08:00.371 }' 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.371 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.630 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:00.630 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.630 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.630 [2024-09-29 21:39:19.559573] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:00.630 [2024-09-29 21:39:19.559674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:00.630 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.630 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.630 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.630 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.630 [2024-09-29 21:39:19.571572] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.630 [2024-09-29 21:39:19.571657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.630 [2024-09-29 21:39:19.571688] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.630 [2024-09-29 21:39:19.571715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.630 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.630 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:00.630 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.630 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.890 [2024-09-29 21:39:19.660448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.890 BaseBdev1 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.890 [ 00:08:00.890 { 00:08:00.890 "name": "BaseBdev1", 00:08:00.890 "aliases": [ 00:08:00.890 "b0365182-8417-48e8-9f46-68cf137530e0" 00:08:00.890 ], 00:08:00.890 "product_name": "Malloc disk", 00:08:00.890 "block_size": 512, 00:08:00.890 "num_blocks": 65536, 00:08:00.890 "uuid": "b0365182-8417-48e8-9f46-68cf137530e0", 00:08:00.890 "assigned_rate_limits": { 00:08:00.890 "rw_ios_per_sec": 0, 00:08:00.890 "rw_mbytes_per_sec": 0, 00:08:00.890 "r_mbytes_per_sec": 0, 00:08:00.890 "w_mbytes_per_sec": 0 00:08:00.890 }, 00:08:00.890 "claimed": true, 
00:08:00.890 "claim_type": "exclusive_write", 00:08:00.890 "zoned": false, 00:08:00.890 "supported_io_types": { 00:08:00.890 "read": true, 00:08:00.890 "write": true, 00:08:00.890 "unmap": true, 00:08:00.890 "flush": true, 00:08:00.890 "reset": true, 00:08:00.890 "nvme_admin": false, 00:08:00.890 "nvme_io": false, 00:08:00.890 "nvme_io_md": false, 00:08:00.890 "write_zeroes": true, 00:08:00.890 "zcopy": true, 00:08:00.890 "get_zone_info": false, 00:08:00.890 "zone_management": false, 00:08:00.890 "zone_append": false, 00:08:00.890 "compare": false, 00:08:00.890 "compare_and_write": false, 00:08:00.890 "abort": true, 00:08:00.890 "seek_hole": false, 00:08:00.890 "seek_data": false, 00:08:00.890 "copy": true, 00:08:00.890 "nvme_iov_md": false 00:08:00.890 }, 00:08:00.890 "memory_domains": [ 00:08:00.890 { 00:08:00.890 "dma_device_id": "system", 00:08:00.890 "dma_device_type": 1 00:08:00.890 }, 00:08:00.890 { 00:08:00.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.890 "dma_device_type": 2 00:08:00.890 } 00:08:00.890 ], 00:08:00.890 "driver_specific": {} 00:08:00.890 } 00:08:00.890 ] 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:00.890 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.891 "name": "Existed_Raid", 00:08:00.891 "uuid": "531ff22a-5d75-4b78-8a58-67e8ee2b6cca", 00:08:00.891 "strip_size_kb": 0, 00:08:00.891 "state": "configuring", 00:08:00.891 "raid_level": "raid1", 00:08:00.891 "superblock": true, 00:08:00.891 "num_base_bdevs": 2, 00:08:00.891 "num_base_bdevs_discovered": 1, 00:08:00.891 "num_base_bdevs_operational": 2, 00:08:00.891 "base_bdevs_list": [ 00:08:00.891 { 00:08:00.891 "name": "BaseBdev1", 00:08:00.891 "uuid": "b0365182-8417-48e8-9f46-68cf137530e0", 00:08:00.891 "is_configured": true, 00:08:00.891 "data_offset": 2048, 00:08:00.891 "data_size": 63488 00:08:00.891 }, 00:08:00.891 { 00:08:00.891 "name": "BaseBdev2", 00:08:00.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.891 "is_configured": false, 00:08:00.891 
"data_offset": 0, 00:08:00.891 "data_size": 0 00:08:00.891 } 00:08:00.891 ] 00:08:00.891 }' 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.891 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.151 [2024-09-29 21:39:20.111702] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.151 [2024-09-29 21:39:20.111749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.151 [2024-09-29 21:39:20.123721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.151 [2024-09-29 21:39:20.125875] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.151 [2024-09-29 21:39:20.125919] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.151 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.411 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.411 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.411 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.411 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.411 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.411 21:39:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.411 "name": "Existed_Raid", 00:08:01.411 "uuid": "18d8392e-0e0f-4265-84ae-dc7d236f2c06", 00:08:01.411 "strip_size_kb": 0, 00:08:01.411 "state": "configuring", 00:08:01.411 "raid_level": "raid1", 00:08:01.411 "superblock": true, 00:08:01.411 "num_base_bdevs": 2, 00:08:01.411 "num_base_bdevs_discovered": 1, 00:08:01.411 "num_base_bdevs_operational": 2, 00:08:01.411 "base_bdevs_list": [ 00:08:01.411 { 00:08:01.411 "name": "BaseBdev1", 00:08:01.411 "uuid": "b0365182-8417-48e8-9f46-68cf137530e0", 00:08:01.411 "is_configured": true, 00:08:01.411 "data_offset": 2048, 00:08:01.411 "data_size": 63488 00:08:01.411 }, 00:08:01.411 { 00:08:01.411 "name": "BaseBdev2", 00:08:01.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.411 "is_configured": false, 00:08:01.411 "data_offset": 0, 00:08:01.411 "data_size": 0 00:08:01.411 } 00:08:01.411 ] 00:08:01.411 }' 00:08:01.411 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.411 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.671 [2024-09-29 21:39:20.597980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.671 [2024-09-29 21:39:20.598344] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:01.671 [2024-09-29 21:39:20.598402] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:01.671 [2024-09-29 21:39:20.598747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:01.671 
BaseBdev2 00:08:01.671 [2024-09-29 21:39:20.599056] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:01.671 [2024-09-29 21:39:20.599075] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:01.671 [2024-09-29 21:39:20.599240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.671 [ 00:08:01.671 { 00:08:01.671 "name": "BaseBdev2", 00:08:01.671 "aliases": [ 00:08:01.671 "5b6da249-972f-4c3b-8fcc-b987ebcc74dc" 00:08:01.671 ], 00:08:01.671 "product_name": "Malloc disk", 00:08:01.671 "block_size": 512, 00:08:01.671 "num_blocks": 65536, 00:08:01.671 "uuid": "5b6da249-972f-4c3b-8fcc-b987ebcc74dc", 00:08:01.671 "assigned_rate_limits": { 00:08:01.671 "rw_ios_per_sec": 0, 00:08:01.671 "rw_mbytes_per_sec": 0, 00:08:01.671 "r_mbytes_per_sec": 0, 00:08:01.671 "w_mbytes_per_sec": 0 00:08:01.671 }, 00:08:01.671 "claimed": true, 00:08:01.671 "claim_type": "exclusive_write", 00:08:01.671 "zoned": false, 00:08:01.671 "supported_io_types": { 00:08:01.671 "read": true, 00:08:01.671 "write": true, 00:08:01.671 "unmap": true, 00:08:01.671 "flush": true, 00:08:01.671 "reset": true, 00:08:01.671 "nvme_admin": false, 00:08:01.671 "nvme_io": false, 00:08:01.671 "nvme_io_md": false, 00:08:01.671 "write_zeroes": true, 00:08:01.671 "zcopy": true, 00:08:01.671 "get_zone_info": false, 00:08:01.671 "zone_management": false, 00:08:01.671 "zone_append": false, 00:08:01.671 "compare": false, 00:08:01.671 "compare_and_write": false, 00:08:01.671 "abort": true, 00:08:01.671 "seek_hole": false, 00:08:01.671 "seek_data": false, 00:08:01.671 "copy": true, 00:08:01.671 "nvme_iov_md": false 00:08:01.671 }, 00:08:01.671 "memory_domains": [ 00:08:01.671 { 00:08:01.671 "dma_device_id": "system", 00:08:01.671 "dma_device_type": 1 00:08:01.671 }, 00:08:01.671 { 00:08:01.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.671 "dma_device_type": 2 00:08:01.671 } 00:08:01.671 ], 00:08:01.671 "driver_specific": {} 00:08:01.671 } 00:08:01.671 ] 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.671 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.930 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.930 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:01.930 "name": "Existed_Raid", 00:08:01.930 "uuid": "18d8392e-0e0f-4265-84ae-dc7d236f2c06", 00:08:01.930 "strip_size_kb": 0, 00:08:01.930 "state": "online", 00:08:01.930 "raid_level": "raid1", 00:08:01.930 "superblock": true, 00:08:01.930 "num_base_bdevs": 2, 00:08:01.930 "num_base_bdevs_discovered": 2, 00:08:01.930 "num_base_bdevs_operational": 2, 00:08:01.930 "base_bdevs_list": [ 00:08:01.930 { 00:08:01.930 "name": "BaseBdev1", 00:08:01.930 "uuid": "b0365182-8417-48e8-9f46-68cf137530e0", 00:08:01.930 "is_configured": true, 00:08:01.930 "data_offset": 2048, 00:08:01.930 "data_size": 63488 00:08:01.930 }, 00:08:01.930 { 00:08:01.930 "name": "BaseBdev2", 00:08:01.930 "uuid": "5b6da249-972f-4c3b-8fcc-b987ebcc74dc", 00:08:01.930 "is_configured": true, 00:08:01.930 "data_offset": 2048, 00:08:01.930 "data_size": 63488 00:08:01.930 } 00:08:01.930 ] 00:08:01.930 }' 00:08:01.930 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.930 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:02.190 21:39:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.190 [2024-09-29 21:39:21.021527] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.190 "name": "Existed_Raid", 00:08:02.190 "aliases": [ 00:08:02.190 "18d8392e-0e0f-4265-84ae-dc7d236f2c06" 00:08:02.190 ], 00:08:02.190 "product_name": "Raid Volume", 00:08:02.190 "block_size": 512, 00:08:02.190 "num_blocks": 63488, 00:08:02.190 "uuid": "18d8392e-0e0f-4265-84ae-dc7d236f2c06", 00:08:02.190 "assigned_rate_limits": { 00:08:02.190 "rw_ios_per_sec": 0, 00:08:02.190 "rw_mbytes_per_sec": 0, 00:08:02.190 "r_mbytes_per_sec": 0, 00:08:02.190 "w_mbytes_per_sec": 0 00:08:02.190 }, 00:08:02.190 "claimed": false, 00:08:02.190 "zoned": false, 00:08:02.190 "supported_io_types": { 00:08:02.190 "read": true, 00:08:02.190 "write": true, 00:08:02.190 "unmap": false, 00:08:02.190 "flush": false, 00:08:02.190 "reset": true, 00:08:02.190 "nvme_admin": false, 00:08:02.190 "nvme_io": false, 00:08:02.190 "nvme_io_md": false, 00:08:02.190 "write_zeroes": true, 00:08:02.190 "zcopy": false, 00:08:02.190 "get_zone_info": false, 00:08:02.190 "zone_management": false, 00:08:02.190 "zone_append": false, 00:08:02.190 "compare": false, 00:08:02.190 "compare_and_write": false, 00:08:02.190 "abort": false, 00:08:02.190 "seek_hole": false, 00:08:02.190 "seek_data": false, 00:08:02.190 "copy": false, 00:08:02.190 "nvme_iov_md": false 00:08:02.190 }, 00:08:02.190 "memory_domains": [ 00:08:02.190 { 00:08:02.190 "dma_device_id": "system", 00:08:02.190 
"dma_device_type": 1 00:08:02.190 }, 00:08:02.190 { 00:08:02.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.190 "dma_device_type": 2 00:08:02.190 }, 00:08:02.190 { 00:08:02.190 "dma_device_id": "system", 00:08:02.190 "dma_device_type": 1 00:08:02.190 }, 00:08:02.190 { 00:08:02.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.190 "dma_device_type": 2 00:08:02.190 } 00:08:02.190 ], 00:08:02.190 "driver_specific": { 00:08:02.190 "raid": { 00:08:02.190 "uuid": "18d8392e-0e0f-4265-84ae-dc7d236f2c06", 00:08:02.190 "strip_size_kb": 0, 00:08:02.190 "state": "online", 00:08:02.190 "raid_level": "raid1", 00:08:02.190 "superblock": true, 00:08:02.190 "num_base_bdevs": 2, 00:08:02.190 "num_base_bdevs_discovered": 2, 00:08:02.190 "num_base_bdevs_operational": 2, 00:08:02.190 "base_bdevs_list": [ 00:08:02.190 { 00:08:02.190 "name": "BaseBdev1", 00:08:02.190 "uuid": "b0365182-8417-48e8-9f46-68cf137530e0", 00:08:02.190 "is_configured": true, 00:08:02.190 "data_offset": 2048, 00:08:02.190 "data_size": 63488 00:08:02.190 }, 00:08:02.190 { 00:08:02.190 "name": "BaseBdev2", 00:08:02.190 "uuid": "5b6da249-972f-4c3b-8fcc-b987ebcc74dc", 00:08:02.190 "is_configured": true, 00:08:02.190 "data_offset": 2048, 00:08:02.190 "data_size": 63488 00:08:02.190 } 00:08:02.190 ] 00:08:02.190 } 00:08:02.190 } 00:08:02.190 }' 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:02.190 BaseBdev2' 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.190 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:02.450 21:39:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.450 [2024-09-29 21:39:21.237081] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.450 "name": "Existed_Raid", 00:08:02.450 "uuid": "18d8392e-0e0f-4265-84ae-dc7d236f2c06", 00:08:02.450 "strip_size_kb": 0, 00:08:02.450 "state": "online", 00:08:02.450 "raid_level": "raid1", 00:08:02.450 "superblock": true, 00:08:02.450 "num_base_bdevs": 2, 00:08:02.450 "num_base_bdevs_discovered": 1, 00:08:02.450 "num_base_bdevs_operational": 1, 00:08:02.450 "base_bdevs_list": [ 00:08:02.450 { 00:08:02.450 "name": null, 00:08:02.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.450 "is_configured": false, 00:08:02.450 "data_offset": 0, 00:08:02.450 "data_size": 63488 00:08:02.450 }, 00:08:02.450 { 00:08:02.450 "name": "BaseBdev2", 00:08:02.450 "uuid": "5b6da249-972f-4c3b-8fcc-b987ebcc74dc", 00:08:02.450 "is_configured": true, 00:08:02.450 "data_offset": 2048, 00:08:02.450 "data_size": 63488 00:08:02.450 } 00:08:02.450 ] 00:08:02.450 }' 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.450 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.020 [2024-09-29 21:39:21.818063] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:03.020 [2024-09-29 21:39:21.818187] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.020 [2024-09-29 21:39:21.919455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.020 [2024-09-29 21:39:21.919622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.020 [2024-09-29 21:39:21.919641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63009 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 63009 ']' 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 63009 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.020 21:39:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63009 00:08:03.279 21:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.279 killing process with pid 63009 
00:08:03.279 21:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.279 21:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63009' 00:08:03.279 21:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 63009 00:08:03.279 [2024-09-29 21:39:22.015124] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.279 21:39:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 63009 00:08:03.279 [2024-09-29 21:39:22.032898] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.658 21:39:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:04.658 00:08:04.658 real 0m5.148s 00:08:04.658 user 0m7.089s 00:08:04.658 sys 0m0.934s 00:08:04.658 ************************************ 00:08:04.658 END TEST raid_state_function_test_sb 00:08:04.658 ************************************ 00:08:04.658 21:39:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.658 21:39:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.658 21:39:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:04.658 21:39:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:04.658 21:39:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.658 21:39:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.658 ************************************ 00:08:04.658 START TEST raid_superblock_test 00:08:04.658 ************************************ 00:08:04.658 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:04.658 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:04.658 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:04.658 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:04.658 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:04.658 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:04.658 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:04.658 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:04.658 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:04.658 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:04.658 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63261 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63261 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63261 ']' 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:04.659 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.659 [2024-09-29 21:39:23.531220] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:04.659 [2024-09-29 21:39:23.531420] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63261 ] 00:08:04.918 [2024-09-29 21:39:23.693787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.177 [2024-09-29 21:39:23.942443] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.436 [2024-09-29 21:39:24.167136] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.436 [2024-09-29 21:39:24.167239] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.436 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.436 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:05.436 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:05.436 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:05.436 21:39:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.437 malloc1 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.437 [2024-09-29 21:39:24.409239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:05.437 [2024-09-29 21:39:24.409399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.437 [2024-09-29 21:39:24.409453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:05.437 [2024-09-29 21:39:24.409514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.437 
[2024-09-29 21:39:24.411842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.437 [2024-09-29 21:39:24.411928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:05.437 pt1 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.437 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.696 malloc2 00:08:05.696 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.696 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:05.696 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.697 21:39:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.697 [2024-09-29 21:39:24.504281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:05.697 [2024-09-29 21:39:24.504343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:05.697 [2024-09-29 21:39:24.504380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:05.697 [2024-09-29 21:39:24.504389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:05.697 [2024-09-29 21:39:24.506727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:05.697 [2024-09-29 21:39:24.506763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:05.697 pt2 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.697 [2024-09-29 21:39:24.516340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:05.697 [2024-09-29 21:39:24.518410] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:05.697 [2024-09-29 21:39:24.518567] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:05.697 [2024-09-29 21:39:24.518580] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:05.697 [2024-09-29 
21:39:24.518805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:05.697 [2024-09-29 21:39:24.518963] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:05.697 [2024-09-29 21:39:24.518975] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:05.697 [2024-09-29 21:39:24.519129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.697 21:39:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.697 "name": "raid_bdev1", 00:08:05.697 "uuid": "e140a1f7-9eb3-475e-a2dc-c4106eb6aadf", 00:08:05.697 "strip_size_kb": 0, 00:08:05.697 "state": "online", 00:08:05.697 "raid_level": "raid1", 00:08:05.697 "superblock": true, 00:08:05.697 "num_base_bdevs": 2, 00:08:05.697 "num_base_bdevs_discovered": 2, 00:08:05.697 "num_base_bdevs_operational": 2, 00:08:05.697 "base_bdevs_list": [ 00:08:05.697 { 00:08:05.697 "name": "pt1", 00:08:05.697 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:05.697 "is_configured": true, 00:08:05.697 "data_offset": 2048, 00:08:05.697 "data_size": 63488 00:08:05.697 }, 00:08:05.697 { 00:08:05.697 "name": "pt2", 00:08:05.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:05.697 "is_configured": true, 00:08:05.697 "data_offset": 2048, 00:08:05.697 "data_size": 63488 00:08:05.697 } 00:08:05.697 ] 00:08:05.697 }' 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.697 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.267 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:06.267 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:06.267 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:06.267 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:06.267 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:06.267 
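The `verify_raid_bdev_state raid_bdev1 online raid1 0 2` call above selects the named raid bdev from `rpc_cmd bdev_raid_get_bdevs all` with jq and compares its fields against the expected values. A minimal Python sketch of those checks (not SPDK code; the JSON literal is copied from the `raid_bdev_info` captured in the log above):

```python
import json

# raid_bdev_info as dumped by `rpc.py bdev_raid_get_bdevs all` in the log.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "e140a1f7-9eb3-475e-a2dc-c4106eb6aadf",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2,
  "base_bdevs_list": [
    {"name": "pt1", "uuid": "00000000-0000-0000-0000-000000000001",
     "is_configured": true, "data_offset": 2048, "data_size": 63488},
    {"name": "pt2", "uuid": "00000000-0000-0000-0000-000000000002",
     "is_configured": true, "data_offset": 2048, "data_size": 63488}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size, num_operational):
    # Mirrors what the shell helper checks via jq on the RPC output.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    return True

# Equivalent of: verify_raid_bdev_state raid_bdev1 online raid1 0 2
print(verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 2))  # True
```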
21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:06.267 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:06.267 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.267 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.267 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:06.267 [2024-09-29 21:39:24.951773] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.267 21:39:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.267 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:06.267 "name": "raid_bdev1", 00:08:06.267 "aliases": [ 00:08:06.267 "e140a1f7-9eb3-475e-a2dc-c4106eb6aadf" 00:08:06.267 ], 00:08:06.267 "product_name": "Raid Volume", 00:08:06.267 "block_size": 512, 00:08:06.267 "num_blocks": 63488, 00:08:06.267 "uuid": "e140a1f7-9eb3-475e-a2dc-c4106eb6aadf", 00:08:06.267 "assigned_rate_limits": { 00:08:06.267 "rw_ios_per_sec": 0, 00:08:06.267 "rw_mbytes_per_sec": 0, 00:08:06.267 "r_mbytes_per_sec": 0, 00:08:06.267 "w_mbytes_per_sec": 0 00:08:06.267 }, 00:08:06.267 "claimed": false, 00:08:06.267 "zoned": false, 00:08:06.267 "supported_io_types": { 00:08:06.267 "read": true, 00:08:06.267 "write": true, 00:08:06.267 "unmap": false, 00:08:06.267 "flush": false, 00:08:06.267 "reset": true, 00:08:06.267 "nvme_admin": false, 00:08:06.267 "nvme_io": false, 00:08:06.267 "nvme_io_md": false, 00:08:06.267 "write_zeroes": true, 00:08:06.267 "zcopy": false, 00:08:06.267 "get_zone_info": false, 00:08:06.267 "zone_management": false, 00:08:06.267 "zone_append": false, 00:08:06.267 "compare": false, 00:08:06.267 "compare_and_write": false, 00:08:06.267 "abort": false, 00:08:06.267 "seek_hole": false, 
00:08:06.267 "seek_data": false, 00:08:06.267 "copy": false, 00:08:06.267 "nvme_iov_md": false 00:08:06.267 }, 00:08:06.267 "memory_domains": [ 00:08:06.267 { 00:08:06.267 "dma_device_id": "system", 00:08:06.267 "dma_device_type": 1 00:08:06.267 }, 00:08:06.267 { 00:08:06.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.267 "dma_device_type": 2 00:08:06.267 }, 00:08:06.267 { 00:08:06.267 "dma_device_id": "system", 00:08:06.267 "dma_device_type": 1 00:08:06.267 }, 00:08:06.267 { 00:08:06.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.267 "dma_device_type": 2 00:08:06.267 } 00:08:06.267 ], 00:08:06.267 "driver_specific": { 00:08:06.267 "raid": { 00:08:06.267 "uuid": "e140a1f7-9eb3-475e-a2dc-c4106eb6aadf", 00:08:06.267 "strip_size_kb": 0, 00:08:06.267 "state": "online", 00:08:06.267 "raid_level": "raid1", 00:08:06.267 "superblock": true, 00:08:06.267 "num_base_bdevs": 2, 00:08:06.267 "num_base_bdevs_discovered": 2, 00:08:06.267 "num_base_bdevs_operational": 2, 00:08:06.267 "base_bdevs_list": [ 00:08:06.267 { 00:08:06.267 "name": "pt1", 00:08:06.267 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:06.267 "is_configured": true, 00:08:06.267 "data_offset": 2048, 00:08:06.267 "data_size": 63488 00:08:06.267 }, 00:08:06.267 { 00:08:06.267 "name": "pt2", 00:08:06.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:06.267 "is_configured": true, 00:08:06.267 "data_offset": 2048, 00:08:06.267 "data_size": 63488 00:08:06.267 } 00:08:06.267 ] 00:08:06.267 } 00:08:06.267 } 00:08:06.267 }' 00:08:06.267 21:39:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:06.267 pt2' 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.267 21:39:25 
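The `cmp_raid_bdev='512   '` result that follows comes from the jq expression `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`: jq's `join` renders null elements as empty strings, so a bdev with block size 512 and no metadata/DIF fields yields `"512"` followed by three spaces, which the script then compares against each base bdev with `[[ ... == ... ]]`. An illustrative Python sketch of that coercion (the dict literals are hypothetical stand-ins for the RPC output):

```python
def jq_join(bdev, keys=("block_size", "md_size", "md_interleave", "dif_type")):
    """Approximate jq's `[...] | join(" ")` for the listed bdev fields."""
    def render(v):
        if v is None:
            return ""        # jq join() turns null into an empty string
        if isinstance(v, bool):
            return "true" if v else "false"
        return str(v)
    return " ".join(render(bdev.get(k)) for k in keys)

raid_bdev = {"block_size": 512}  # md_size/md_interleave/dif_type absent (null)
base_bdev = {"block_size": 512}

print(repr(jq_join(raid_bdev)))  # '512   '  (three trailing spaces)
assert jq_join(raid_bdev) == jq_join(base_bdev)
```

This is why the log shows the shell comparison as `[[ 512 == \5\1\2\ \ \  ]]`: the escaped pattern is literally "512" plus three spaces.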
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.267 [2024-09-29 21:39:25.187313] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e140a1f7-9eb3-475e-a2dc-c4106eb6aadf 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e140a1f7-9eb3-475e-a2dc-c4106eb6aadf ']' 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.267 [2024-09-29 21:39:25.231009] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:06.267 [2024-09-29 21:39:25.231089] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.267 [2024-09-29 21:39:25.231169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.267 [2024-09-29 21:39:25.231241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.267 [2024-09-29 21:39:25.231253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.267 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.528 [2024-09-29 21:39:25.338845] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:06.528 [2024-09-29 21:39:25.340984] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:06.528 [2024-09-29 21:39:25.341103] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:08:06.528 [2024-09-29 21:39:25.341202] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:06.528 [2024-09-29 21:39:25.341251] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:06.528 [2024-09-29 21:39:25.341297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:06.528 request: 00:08:06.528 { 00:08:06.528 "name": "raid_bdev1", 00:08:06.528 "raid_level": "raid1", 00:08:06.528 "base_bdevs": [ 00:08:06.528 "malloc1", 00:08:06.528 "malloc2" 00:08:06.528 ], 00:08:06.528 "superblock": false, 00:08:06.528 "method": "bdev_raid_create", 00:08:06.528 "req_id": 1 00:08:06.528 } 00:08:06.528 Got JSON-RPC error response 00:08:06.528 response: 00:08:06.528 { 00:08:06.528 "code": -17, 00:08:06.528 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:06.528 } 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.528 [2024-09-29 21:39:25.406695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:06.528 [2024-09-29 21:39:25.406743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.528 [2024-09-29 21:39:25.406757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:06.528 [2024-09-29 21:39:25.406768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.528 [2024-09-29 21:39:25.409181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.528 [2024-09-29 21:39:25.409217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:06.528 [2024-09-29 21:39:25.409285] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:06.528 [2024-09-29 21:39:25.409344] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:06.528 pt1 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.528 21:39:25 
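The `NOT rpc_cmd bdev_raid_create ...` step above exercises the expected-failure path: recreating `raid_bdev1` over base bdevs that still carry a superblock for a different raid bdev is rejected with JSON-RPC error code -17 (errno `EEXIST`, "File exists"), and the `NOT` wrapper asserts the nonzero exit. A sketch of checking that error object on the client side (not part of the test suite; the response literal is copied from the log):

```python
import json

# Error response returned by the failed bdev_raid_create call in the log.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

def raid_create_failed_as_expected(err):
    # -17 corresponds to -EEXIST; the test only proceeds because the
    # failure was anticipated (es=1 in the xtrace above).
    return err["code"] == -17 and "File exists" in err["message"]

print(raid_create_failed_as_expected(response))  # True
```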
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.528 "name": "raid_bdev1", 00:08:06.528 "uuid": "e140a1f7-9eb3-475e-a2dc-c4106eb6aadf", 00:08:06.528 "strip_size_kb": 0, 00:08:06.528 "state": "configuring", 00:08:06.528 "raid_level": "raid1", 00:08:06.528 "superblock": true, 00:08:06.528 "num_base_bdevs": 2, 00:08:06.528 "num_base_bdevs_discovered": 1, 00:08:06.528 "num_base_bdevs_operational": 2, 00:08:06.528 "base_bdevs_list": [ 00:08:06.528 { 00:08:06.528 "name": "pt1", 00:08:06.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:06.528 
"is_configured": true, 00:08:06.528 "data_offset": 2048, 00:08:06.528 "data_size": 63488 00:08:06.528 }, 00:08:06.528 { 00:08:06.528 "name": null, 00:08:06.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:06.528 "is_configured": false, 00:08:06.528 "data_offset": 2048, 00:08:06.528 "data_size": 63488 00:08:06.528 } 00:08:06.528 ] 00:08:06.528 }' 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.528 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.097 [2024-09-29 21:39:25.861901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:07.097 [2024-09-29 21:39:25.862021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.097 [2024-09-29 21:39:25.862067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:07.097 [2024-09-29 21:39:25.862103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.097 [2024-09-29 21:39:25.862547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.097 [2024-09-29 21:39:25.862611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:07.097 [2024-09-29 21:39:25.862701] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:07.097 [2024-09-29 21:39:25.862749] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:07.097 [2024-09-29 21:39:25.862873] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:07.097 [2024-09-29 21:39:25.862912] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:07.097 [2024-09-29 21:39:25.863181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:07.097 [2024-09-29 21:39:25.863370] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:07.097 [2024-09-29 21:39:25.863411] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:07.097 [2024-09-29 21:39:25.863590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.097 pt2 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.097 
21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.097 "name": "raid_bdev1", 00:08:07.097 "uuid": "e140a1f7-9eb3-475e-a2dc-c4106eb6aadf", 00:08:07.097 "strip_size_kb": 0, 00:08:07.097 "state": "online", 00:08:07.097 "raid_level": "raid1", 00:08:07.097 "superblock": true, 00:08:07.097 "num_base_bdevs": 2, 00:08:07.097 "num_base_bdevs_discovered": 2, 00:08:07.097 "num_base_bdevs_operational": 2, 00:08:07.097 "base_bdevs_list": [ 00:08:07.097 { 00:08:07.097 "name": "pt1", 00:08:07.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:07.097 "is_configured": true, 00:08:07.097 "data_offset": 2048, 00:08:07.097 "data_size": 63488 00:08:07.097 }, 00:08:07.097 { 00:08:07.097 "name": "pt2", 00:08:07.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:07.097 "is_configured": true, 00:08:07.097 "data_offset": 2048, 00:08:07.097 "data_size": 63488 00:08:07.097 } 00:08:07.097 ] 00:08:07.097 }' 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:07.097 21:39:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.356 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:07.357 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:07.357 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:07.357 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:07.357 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:07.357 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:07.357 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:07.357 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.357 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:07.357 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.357 [2024-09-29 21:39:26.333382] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.616 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.616 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:07.616 "name": "raid_bdev1", 00:08:07.616 "aliases": [ 00:08:07.616 "e140a1f7-9eb3-475e-a2dc-c4106eb6aadf" 00:08:07.616 ], 00:08:07.616 "product_name": "Raid Volume", 00:08:07.616 "block_size": 512, 00:08:07.616 "num_blocks": 63488, 00:08:07.616 "uuid": "e140a1f7-9eb3-475e-a2dc-c4106eb6aadf", 00:08:07.616 "assigned_rate_limits": { 00:08:07.616 "rw_ios_per_sec": 0, 00:08:07.616 "rw_mbytes_per_sec": 0, 00:08:07.616 "r_mbytes_per_sec": 0, 00:08:07.616 "w_mbytes_per_sec": 0 
00:08:07.616 }, 00:08:07.616 "claimed": false, 00:08:07.616 "zoned": false, 00:08:07.616 "supported_io_types": { 00:08:07.616 "read": true, 00:08:07.616 "write": true, 00:08:07.616 "unmap": false, 00:08:07.616 "flush": false, 00:08:07.616 "reset": true, 00:08:07.616 "nvme_admin": false, 00:08:07.616 "nvme_io": false, 00:08:07.616 "nvme_io_md": false, 00:08:07.616 "write_zeroes": true, 00:08:07.616 "zcopy": false, 00:08:07.616 "get_zone_info": false, 00:08:07.616 "zone_management": false, 00:08:07.616 "zone_append": false, 00:08:07.616 "compare": false, 00:08:07.616 "compare_and_write": false, 00:08:07.616 "abort": false, 00:08:07.616 "seek_hole": false, 00:08:07.616 "seek_data": false, 00:08:07.616 "copy": false, 00:08:07.616 "nvme_iov_md": false 00:08:07.616 }, 00:08:07.616 "memory_domains": [ 00:08:07.616 { 00:08:07.616 "dma_device_id": "system", 00:08:07.616 "dma_device_type": 1 00:08:07.616 }, 00:08:07.616 { 00:08:07.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.616 "dma_device_type": 2 00:08:07.616 }, 00:08:07.616 { 00:08:07.616 "dma_device_id": "system", 00:08:07.616 "dma_device_type": 1 00:08:07.616 }, 00:08:07.616 { 00:08:07.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.616 "dma_device_type": 2 00:08:07.616 } 00:08:07.616 ], 00:08:07.616 "driver_specific": { 00:08:07.616 "raid": { 00:08:07.616 "uuid": "e140a1f7-9eb3-475e-a2dc-c4106eb6aadf", 00:08:07.616 "strip_size_kb": 0, 00:08:07.616 "state": "online", 00:08:07.616 "raid_level": "raid1", 00:08:07.616 "superblock": true, 00:08:07.616 "num_base_bdevs": 2, 00:08:07.616 "num_base_bdevs_discovered": 2, 00:08:07.616 "num_base_bdevs_operational": 2, 00:08:07.616 "base_bdevs_list": [ 00:08:07.616 { 00:08:07.616 "name": "pt1", 00:08:07.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:07.616 "is_configured": true, 00:08:07.616 "data_offset": 2048, 00:08:07.616 "data_size": 63488 00:08:07.616 }, 00:08:07.616 { 00:08:07.616 "name": "pt2", 00:08:07.616 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:07.616 "is_configured": true, 00:08:07.616 "data_offset": 2048, 00:08:07.616 "data_size": 63488 00:08:07.616 } 00:08:07.616 ] 00:08:07.616 } 00:08:07.616 } 00:08:07.616 }' 00:08:07.616 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:07.617 pt2' 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.617 [2024-09-29 21:39:26.560947] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e140a1f7-9eb3-475e-a2dc-c4106eb6aadf '!=' e140a1f7-9eb3-475e-a2dc-c4106eb6aadf ']' 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.617 21:39:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.877 [2024-09-29 21:39:26.604718] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:07.877 "name": "raid_bdev1", 00:08:07.877 "uuid": "e140a1f7-9eb3-475e-a2dc-c4106eb6aadf", 00:08:07.877 "strip_size_kb": 0, 00:08:07.877 "state": "online", 00:08:07.877 "raid_level": "raid1", 00:08:07.877 "superblock": true, 00:08:07.877 "num_base_bdevs": 2, 00:08:07.877 "num_base_bdevs_discovered": 1, 00:08:07.877 "num_base_bdevs_operational": 1, 00:08:07.877 "base_bdevs_list": [ 00:08:07.877 { 00:08:07.877 "name": null, 00:08:07.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.877 "is_configured": false, 00:08:07.877 "data_offset": 0, 00:08:07.877 "data_size": 63488 00:08:07.877 }, 00:08:07.877 { 00:08:07.877 "name": "pt2", 00:08:07.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:07.877 "is_configured": true, 00:08:07.877 "data_offset": 2048, 00:08:07.877 "data_size": 63488 00:08:07.877 } 00:08:07.877 ] 00:08:07.877 }' 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.877 21:39:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.137 [2024-09-29 21:39:27.031931] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.137 [2024-09-29 21:39:27.032005] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.137 [2024-09-29 21:39:27.032109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.137 [2024-09-29 21:39:27.032172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.137 [2024-09-29 21:39:27.032277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.137 [2024-09-29 21:39:27.103819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:08.137 [2024-09-29 21:39:27.103909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.137 [2024-09-29 21:39:27.103940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:08.137 [2024-09-29 21:39:27.103971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.137 [2024-09-29 21:39:27.106443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.137 [2024-09-29 21:39:27.106529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:08.137 [2024-09-29 21:39:27.106629] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:08.137 [2024-09-29 21:39:27.106719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:08.137 [2024-09-29 21:39:27.106895] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:08.137 [2024-09-29 21:39:27.106941] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:08.137 [2024-09-29 21:39:27.107191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:08.137 [2024-09-29 21:39:27.107376] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:08.137 [2024-09-29 21:39:27.107418] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:08:08.137 [2024-09-29 21:39:27.107592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.137 pt2 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.137 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.397 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.397 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:08.397 "name": "raid_bdev1", 00:08:08.397 "uuid": "e140a1f7-9eb3-475e-a2dc-c4106eb6aadf", 00:08:08.397 "strip_size_kb": 0, 00:08:08.397 "state": "online", 00:08:08.397 "raid_level": "raid1", 00:08:08.397 "superblock": true, 00:08:08.397 "num_base_bdevs": 2, 00:08:08.397 "num_base_bdevs_discovered": 1, 00:08:08.397 "num_base_bdevs_operational": 1, 00:08:08.397 "base_bdevs_list": [ 00:08:08.397 { 00:08:08.397 "name": null, 00:08:08.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.397 "is_configured": false, 00:08:08.397 "data_offset": 2048, 00:08:08.397 "data_size": 63488 00:08:08.397 }, 00:08:08.397 { 00:08:08.397 "name": "pt2", 00:08:08.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.397 "is_configured": true, 00:08:08.397 "data_offset": 2048, 00:08:08.397 "data_size": 63488 00:08:08.397 } 00:08:08.397 ] 00:08:08.397 }' 00:08:08.397 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.397 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.656 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:08.656 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.656 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.656 [2024-09-29 21:39:27.539053] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.656 [2024-09-29 21:39:27.539081] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.656 [2024-09-29 21:39:27.539138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.657 [2024-09-29 21:39:27.539180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.657 [2024-09-29 21:39:27.539189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.657 [2024-09-29 21:39:27.602957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:08.657 [2024-09-29 21:39:27.603059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.657 [2024-09-29 21:39:27.603097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:08.657 [2024-09-29 21:39:27.603124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.657 [2024-09-29 21:39:27.605614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.657 [2024-09-29 21:39:27.605684] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:08.657 [2024-09-29 21:39:27.605779] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:08.657 [2024-09-29 21:39:27.605843] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:08.657 [2024-09-29 21:39:27.605984] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:08.657 [2024-09-29 21:39:27.606044] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.657 [2024-09-29 21:39:27.606083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:08.657 [2024-09-29 21:39:27.606191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:08.657 [2024-09-29 21:39:27.606298] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:08.657 [2024-09-29 21:39:27.606333] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:08.657 [2024-09-29 21:39:27.606578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:08.657 [2024-09-29 21:39:27.606754] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:08.657 [2024-09-29 21:39:27.606799] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:08.657 [2024-09-29 21:39:27.607003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.657 pt1 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.657 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.916 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.916 "name": "raid_bdev1", 00:08:08.916 "uuid": "e140a1f7-9eb3-475e-a2dc-c4106eb6aadf", 00:08:08.916 "strip_size_kb": 0, 00:08:08.916 "state": "online", 00:08:08.916 "raid_level": "raid1", 00:08:08.916 "superblock": true, 00:08:08.916 "num_base_bdevs": 2, 00:08:08.916 "num_base_bdevs_discovered": 1, 00:08:08.916 "num_base_bdevs_operational": 
1, 00:08:08.916 "base_bdevs_list": [ 00:08:08.916 { 00:08:08.916 "name": null, 00:08:08.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.916 "is_configured": false, 00:08:08.916 "data_offset": 2048, 00:08:08.916 "data_size": 63488 00:08:08.916 }, 00:08:08.916 { 00:08:08.916 "name": "pt2", 00:08:08.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:08.916 "is_configured": true, 00:08:08.916 "data_offset": 2048, 00:08:08.916 "data_size": 63488 00:08:08.916 } 00:08:08.916 ] 00:08:08.916 }' 00:08:08.916 21:39:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.916 21:39:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.176 [2024-09-29 21:39:28.098313] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e140a1f7-9eb3-475e-a2dc-c4106eb6aadf '!=' e140a1f7-9eb3-475e-a2dc-c4106eb6aadf ']' 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63261 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63261 ']' 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 63261 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.176 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63261 00:08:09.436 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.436 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.436 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63261' 00:08:09.436 killing process with pid 63261 00:08:09.436 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63261 00:08:09.436 [2024-09-29 21:39:28.173422] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.436 [2024-09-29 21:39:28.173554] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.436 [2024-09-29 21:39:28.173624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.436 21:39:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63261 00:08:09.436 [2024-09-29 21:39:28.173675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state
offline 00:08:09.436 [2024-09-29 21:39:28.389862] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.818 21:39:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:10.818 00:08:10.818 real 0m6.269s 00:08:10.818 user 0m9.216s 00:08:10.818 sys 0m1.175s 00:08:10.818 21:39:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.818 21:39:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.818 ************************************ 00:08:10.818 END TEST raid_superblock_test 00:08:10.818 ************************************ 00:08:10.818 21:39:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:10.818 21:39:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:10.818 21:39:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.818 21:39:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.818 ************************************ 00:08:10.818 START TEST raid_read_error_test 00:08:10.818 ************************************ 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:10.818 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:11.079 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0pe9FnCfz0 00:08:11.079 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63591 00:08:11.079 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:11.079 21:39:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63591 00:08:11.079 
21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 63591 ']' 00:08:11.079 21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.079 21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.079 21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.079 21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.079 21:39:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.079 [2024-09-29 21:39:29.898544] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:11.079 [2024-09-29 21:39:29.898675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63591 ] 00:08:11.340 [2024-09-29 21:39:30.063896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.340 [2024-09-29 21:39:30.311352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.606 [2024-09-29 21:39:30.529146] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.606 [2024-09-29 21:39:30.529185] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.866 BaseBdev1_malloc 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.866 true 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.866 [2024-09-29 21:39:30.776847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:11.866 [2024-09-29 21:39:30.776915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.866 [2024-09-29 21:39:30.776934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:11.866 [2024-09-29 21:39:30.776946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.866 [2024-09-29 21:39:30.779304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.866 [2024-09-29 21:39:30.779433] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:11.866 BaseBdev1 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:11.866 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.867 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.867 BaseBdev2_malloc 00:08:11.867 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.867 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:11.867 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.867 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.867 true 00:08:11.867 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.867 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:11.867 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.867 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.126 [2024-09-29 21:39:30.852636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:12.126 [2024-09-29 21:39:30.852698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.127 [2024-09-29 21:39:30.852716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:12.127 [2024-09-29 21:39:30.852727] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.127 [2024-09-29 21:39:30.855145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.127 [2024-09-29 21:39:30.855254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:12.127 BaseBdev2 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.127 [2024-09-29 21:39:30.864687] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.127 [2024-09-29 21:39:30.866748] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.127 [2024-09-29 21:39:30.866942] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:12.127 [2024-09-29 21:39:30.866956] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:12.127 [2024-09-29 21:39:30.867186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:12.127 [2024-09-29 21:39:30.867357] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:12.127 [2024-09-29 21:39:30.867388] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:12.127 [2024-09-29 21:39:30.867542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.127 "name": "raid_bdev1", 00:08:12.127 "uuid": "94179a4b-6583-4ae8-9a6a-ffb546745b51", 00:08:12.127 "strip_size_kb": 0, 00:08:12.127 "state": "online", 00:08:12.127 "raid_level": "raid1", 00:08:12.127 "superblock": true, 00:08:12.127 "num_base_bdevs": 2, 00:08:12.127 
"num_base_bdevs_discovered": 2, 00:08:12.127 "num_base_bdevs_operational": 2, 00:08:12.127 "base_bdevs_list": [ 00:08:12.127 { 00:08:12.127 "name": "BaseBdev1", 00:08:12.127 "uuid": "26353cec-84c1-5f81-bc61-82f124c7cd5a", 00:08:12.127 "is_configured": true, 00:08:12.127 "data_offset": 2048, 00:08:12.127 "data_size": 63488 00:08:12.127 }, 00:08:12.127 { 00:08:12.127 "name": "BaseBdev2", 00:08:12.127 "uuid": "edaa21d5-d8fb-5d1a-986c-49bacf6824f2", 00:08:12.127 "is_configured": true, 00:08:12.127 "data_offset": 2048, 00:08:12.127 "data_size": 63488 00:08:12.127 } 00:08:12.127 ] 00:08:12.127 }' 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.127 21:39:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.387 21:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:12.387 21:39:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:12.387 [2024-09-29 21:39:31.365282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:13.326 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:13.326 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.326 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:13.586 21:39:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.586 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.586 "name": "raid_bdev1", 00:08:13.586 "uuid": "94179a4b-6583-4ae8-9a6a-ffb546745b51", 00:08:13.586 "strip_size_kb": 0, 00:08:13.586 "state": "online", 
00:08:13.586 "raid_level": "raid1", 00:08:13.586 "superblock": true, 00:08:13.586 "num_base_bdevs": 2, 00:08:13.586 "num_base_bdevs_discovered": 2, 00:08:13.586 "num_base_bdevs_operational": 2, 00:08:13.586 "base_bdevs_list": [ 00:08:13.586 { 00:08:13.586 "name": "BaseBdev1", 00:08:13.586 "uuid": "26353cec-84c1-5f81-bc61-82f124c7cd5a", 00:08:13.586 "is_configured": true, 00:08:13.586 "data_offset": 2048, 00:08:13.586 "data_size": 63488 00:08:13.586 }, 00:08:13.586 { 00:08:13.586 "name": "BaseBdev2", 00:08:13.587 "uuid": "edaa21d5-d8fb-5d1a-986c-49bacf6824f2", 00:08:13.587 "is_configured": true, 00:08:13.587 "data_offset": 2048, 00:08:13.587 "data_size": 63488 00:08:13.587 } 00:08:13.587 ] 00:08:13.587 }' 00:08:13.587 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.587 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.855 [2024-09-29 21:39:32.775615] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.855 [2024-09-29 21:39:32.775670] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.855 [2024-09-29 21:39:32.778386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.855 [2024-09-29 21:39:32.778469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.855 [2024-09-29 21:39:32.778584] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.855 [2024-09-29 21:39:32.778634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:13.855 { 00:08:13.855 "results": [ 00:08:13.855 { 00:08:13.855 "job": "raid_bdev1", 00:08:13.855 "core_mask": "0x1", 00:08:13.855 "workload": "randrw", 00:08:13.855 "percentage": 50, 00:08:13.855 "status": "finished", 00:08:13.855 "queue_depth": 1, 00:08:13.855 "io_size": 131072, 00:08:13.855 "runtime": 1.410943, 00:08:13.855 "iops": 14994.227265027715, 00:08:13.855 "mibps": 1874.2784081284644, 00:08:13.855 "io_failed": 0, 00:08:13.855 "io_timeout": 0, 00:08:13.855 "avg_latency_us": 64.26941241647616, 00:08:13.855 "min_latency_us": 22.134497816593885, 00:08:13.855 "max_latency_us": 1352.216593886463 00:08:13.855 } 00:08:13.855 ], 00:08:13.855 "core_count": 1 00:08:13.855 } 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63591 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63591 ']' 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63591 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63591 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63591' 00:08:13.855 killing process with pid 63591 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63591 00:08:13.855 [2024-09-29 
21:39:32.817893] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.855 21:39:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63591 00:08:14.130 [2024-09-29 21:39:32.964806] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.519 21:39:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:15.519 21:39:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0pe9FnCfz0 00:08:15.519 21:39:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:15.519 21:39:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:15.519 21:39:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:15.519 21:39:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.519 21:39:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:15.519 21:39:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:15.519 00:08:15.519 real 0m4.567s 00:08:15.519 user 0m5.250s 00:08:15.519 sys 0m0.658s 00:08:15.519 21:39:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.519 ************************************ 00:08:15.519 END TEST raid_read_error_test 00:08:15.519 ************************************ 00:08:15.519 21:39:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.519 21:39:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:15.519 21:39:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:15.519 21:39:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.519 21:39:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:15.519 ************************************ 00:08:15.519 START TEST 
raid_write_error_test 00:08:15.519 ************************************ 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:15.519 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:15.520 21:39:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Qx8Hv4iWjk 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63738 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63738 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63738 ']' 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.520 21:39:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.780 [2024-09-29 21:39:34.539329] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:15.780 [2024-09-29 21:39:34.539467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63738 ] 00:08:15.780 [2024-09-29 21:39:34.702734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.039 [2024-09-29 21:39:34.942988] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.298 [2024-09-29 21:39:35.171838] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.298 [2024-09-29 21:39:35.171881] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.558 BaseBdev1_malloc 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.558 true 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.558 [2024-09-29 21:39:35.422327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:16.558 [2024-09-29 21:39:35.422461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.558 [2024-09-29 21:39:35.422498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:16.558 [2024-09-29 21:39:35.422529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.558 [2024-09-29 21:39:35.424958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.558 [2024-09-29 21:39:35.425048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:16.558 BaseBdev1 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.558 BaseBdev2_malloc 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:16.558 21:39:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.558 true 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.558 [2024-09-29 21:39:35.529236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:16.558 [2024-09-29 21:39:35.529291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.558 [2024-09-29 21:39:35.529308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:16.558 [2024-09-29 21:39:35.529319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.558 [2024-09-29 21:39:35.531668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.558 [2024-09-29 21:39:35.531708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:16.558 BaseBdev2 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.558 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.818 [2024-09-29 21:39:35.541294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:16.818 [2024-09-29 21:39:35.543357] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:16.818 [2024-09-29 21:39:35.543548] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:16.818 [2024-09-29 21:39:35.543575] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:16.818 [2024-09-29 21:39:35.543798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:16.818 [2024-09-29 21:39:35.543970] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:16.818 [2024-09-29 21:39:35.543980] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:16.818 [2024-09-29 21:39:35.544224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.818 "name": "raid_bdev1", 00:08:16.818 "uuid": "92a6ecad-dfc3-45b1-be70-64cc4a031ed4", 00:08:16.818 "strip_size_kb": 0, 00:08:16.818 "state": "online", 00:08:16.818 "raid_level": "raid1", 00:08:16.818 "superblock": true, 00:08:16.818 "num_base_bdevs": 2, 00:08:16.818 "num_base_bdevs_discovered": 2, 00:08:16.818 "num_base_bdevs_operational": 2, 00:08:16.818 "base_bdevs_list": [ 00:08:16.818 { 00:08:16.818 "name": "BaseBdev1", 00:08:16.818 "uuid": "7f1826df-5b06-5991-947c-790618a9f873", 00:08:16.818 "is_configured": true, 00:08:16.818 "data_offset": 2048, 00:08:16.818 "data_size": 63488 00:08:16.818 }, 00:08:16.818 { 00:08:16.818 "name": "BaseBdev2", 00:08:16.818 "uuid": "f187d665-cf49-5e48-97c2-ec96a1596fc6", 00:08:16.818 "is_configured": true, 00:08:16.818 "data_offset": 2048, 00:08:16.818 "data_size": 63488 00:08:16.818 } 00:08:16.818 ] 00:08:16.818 }' 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.818 21:39:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.078 21:39:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:17.078 21:39:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:17.078 [2024-09-29 21:39:36.049881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.016 [2024-09-29 21:39:36.967997] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:08:18.016 [2024-09-29 21:39:36.968089] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:18.016 [2024-09-29 21:39:36.968310] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.016 21:39:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.276 21:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:18.276 "name": "raid_bdev1",
00:08:18.276 "uuid": "92a6ecad-dfc3-45b1-be70-64cc4a031ed4",
00:08:18.276 "strip_size_kb": 0,
00:08:18.276 "state": "online",
00:08:18.276 "raid_level": "raid1",
00:08:18.276 "superblock": true,
00:08:18.276 "num_base_bdevs": 2,
00:08:18.276 "num_base_bdevs_discovered": 1,
00:08:18.276 "num_base_bdevs_operational": 1,
00:08:18.276 "base_bdevs_list": [
00:08:18.276 {
00:08:18.276 "name": null,
00:08:18.276 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:18.276 "is_configured": false,
00:08:18.276 "data_offset": 0,
00:08:18.276 "data_size": 63488
00:08:18.276 },
00:08:18.276 {
00:08:18.276 "name": "BaseBdev2",
00:08:18.276 "uuid": "f187d665-cf49-5e48-97c2-ec96a1596fc6",
00:08:18.276 "is_configured": true,
00:08:18.276 "data_offset": 2048,
00:08:18.276 "data_size": 63488
00:08:18.276 }
00:08:18.276 ]
00:08:18.276 }'
00:08:18.276 21:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:18.276 21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:18.536 [2024-09-29 21:39:37.397004] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:18.536 [2024-09-29 21:39:37.397126] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:18.536 [2024-09-29 21:39:37.399716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:18.536 [2024-09-29 21:39:37.399806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:18.536 [2024-09-29 21:39:37.399888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:18.536 [2024-09-29 21:39:37.399940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:08:18.536 {
00:08:18.536 "results": [
00:08:18.536 {
00:08:18.536 "job": "raid_bdev1",
00:08:18.536 "core_mask": "0x1",
00:08:18.536 "workload": "randrw",
00:08:18.536 "percentage": 50,
00:08:18.536 "status": "finished",
00:08:18.536 "queue_depth": 1,
00:08:18.536 "io_size": 131072,
00:08:18.536 "runtime": 1.347752,
00:08:18.536 "iops": 18911.49113486754,
00:08:18.536 "mibps": 2363.9363918584427,
00:08:18.536 "io_failed": 0,
00:08:18.536 "io_timeout": 0,
00:08:18.536 "avg_latency_us": 50.49097443235553,
00:08:18.536 "min_latency_us": 20.90480349344978,
00:08:18.536 "max_latency_us": 1316.4436681222708
00:08:18.536 }
00:08:18.536 ],
00:08:18.536 "core_count": 1
00:08:18.536 }
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63738
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63738 ']'
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63738
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63738
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:18.536 killing process with pid 63738
21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63738'
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63738
00:08:18.536 [2024-09-29 21:39:37.451023] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:18.536 21:39:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63738
00:08:18.796 [2024-09-29 21:39:37.595305] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:20.180 21:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Qx8Hv4iWjk
00:08:20.180 21:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:20.180 21:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:20.180 21:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:08:20.180 21:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:08:20.180 21:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:20.180 21:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
************************************
00:08:20.180 END TEST raid_write_error_test
************************************
00:08:20.180 21:39:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:08:20.180
00:08:20.180 real 0m4.549s
00:08:20.180 user 0m5.198s
00:08:20.180 sys 0m0.675s
00:08:20.180 21:39:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:20.180 21:39:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.180 21:39:39 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:08:20.180 21:39:39 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:08:20.180 21:39:39 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:08:20.180 21:39:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:20.180 21:39:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:20.180 21:39:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:20.180 ************************************
00:08:20.180 START TEST raid_state_function_test
************************************
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:20.180 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:20.181 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:08:20.181 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63881
00:08:20.181 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:20.181 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63881'
Process raid pid: 63881
00:08:20.181 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63881
00:08:20.181 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63881 ']'
00:08:20.181 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:20.181 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:20.181 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:20.181 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:20.181 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:20.181 [2024-09-29 21:39:39.150265] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:08:20.181 [2024-09-29 21:39:39.150466] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:20.451 [2024-09-29 21:39:39.316371] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:20.711 [2024-09-29 21:39:39.558187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:08:20.970 [2024-09-29 21:39:39.791957] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:20.970 [2024-09-29 21:39:39.792121] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.229 [2024-09-29 21:39:39.993470] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:21.229 [2024-09-29 21:39:39.993617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:21.229 [2024-09-29 21:39:39.993649] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:21.229 [2024-09-29 21:39:39.993673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:21.229 [2024-09-29 21:39:39.993700] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:21.229 [2024-09-29 21:39:39.993738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:21.229 21:39:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:21.229 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:21.229 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:21.229 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:21.229 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.229 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.229 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.229 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:21.229 "name": "Existed_Raid",
00:08:21.229 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:21.229 "strip_size_kb": 64,
00:08:21.229 "state": "configuring",
00:08:21.229 "raid_level": "raid0",
00:08:21.229 "superblock": false,
00:08:21.229 "num_base_bdevs": 3,
00:08:21.229 "num_base_bdevs_discovered": 0,
00:08:21.229 "num_base_bdevs_operational": 3,
00:08:21.229 "base_bdevs_list": [
00:08:21.229 {
00:08:21.229 "name": "BaseBdev1",
00:08:21.229 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:21.229 "is_configured": false,
00:08:21.229 "data_offset": 0,
00:08:21.229 "data_size": 0
00:08:21.229 },
00:08:21.229 {
00:08:21.229 "name": "BaseBdev2",
00:08:21.229 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:21.229 "is_configured": false,
00:08:21.229 "data_offset": 0,
00:08:21.229 "data_size": 0
00:08:21.229 },
00:08:21.229 {
00:08:21.229 "name": "BaseBdev3",
00:08:21.229 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:21.229 "is_configured": false,
00:08:21.229 "data_offset": 0,
00:08:21.229 "data_size": 0
00:08:21.229 }
00:08:21.229 ]
00:08:21.229 }'
00:08:21.230 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:21.230 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.490 [2024-09-29 21:39:40.316829] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:21.490 [2024-09-29 21:39:40.316916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.490 [2024-09-29 21:39:40.328841] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:21.490 [2024-09-29 21:39:40.328925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:21.490 [2024-09-29 21:39:40.328952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:21.490 [2024-09-29 21:39:40.328974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:21.490 [2024-09-29 21:39:40.328992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:21.490 [2024-09-29 21:39:40.329013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.490 [2024-09-29 21:39:40.391961] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.490 [
00:08:21.490 {
00:08:21.490 "name": "BaseBdev1",
00:08:21.490 "aliases": [
00:08:21.490 "ffde08bb-4759-47f9-b158-132959784c81"
00:08:21.490 ],
00:08:21.490 "product_name": "Malloc disk",
00:08:21.490 "block_size": 512,
00:08:21.490 "num_blocks": 65536,
00:08:21.490 "uuid": "ffde08bb-4759-47f9-b158-132959784c81",
00:08:21.490 "assigned_rate_limits": {
00:08:21.490 "rw_ios_per_sec": 0,
00:08:21.490 "rw_mbytes_per_sec": 0,
00:08:21.490 "r_mbytes_per_sec": 0,
00:08:21.490 "w_mbytes_per_sec": 0
00:08:21.490 },
00:08:21.490 "claimed": true,
00:08:21.490 "claim_type": "exclusive_write",
00:08:21.490 "zoned": false,
00:08:21.490 "supported_io_types": {
00:08:21.490 "read": true,
00:08:21.490 "write": true,
00:08:21.490 "unmap": true,
00:08:21.490 "flush": true,
00:08:21.490 "reset": true,
00:08:21.490 "nvme_admin": false,
00:08:21.490 "nvme_io": false,
00:08:21.490 "nvme_io_md": false,
00:08:21.490 "write_zeroes": true,
00:08:21.490 "zcopy": true,
00:08:21.490 "get_zone_info": false,
00:08:21.490 "zone_management": false,
00:08:21.490 "zone_append": false,
00:08:21.490 "compare": false,
00:08:21.490 "compare_and_write": false,
00:08:21.490 "abort": true,
00:08:21.490 "seek_hole": false,
00:08:21.490 "seek_data": false,
00:08:21.490 "copy": true,
00:08:21.490 "nvme_iov_md": false
00:08:21.490 },
00:08:21.490 "memory_domains": [
00:08:21.490 {
00:08:21.490 "dma_device_id": "system",
00:08:21.490 "dma_device_type": 1
00:08:21.490 },
00:08:21.490 {
00:08:21.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:21.490 "dma_device_type": 2
00:08:21.490 }
00:08:21.490 ],
00:08:21.490 "driver_specific": {}
00:08:21.490 }
00:08:21.490 ]
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:21.490 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:21.491 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:21.491 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:21.491 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.491 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:21.491 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.751 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:21.751 "name": "Existed_Raid",
00:08:21.751 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:21.751 "strip_size_kb": 64,
00:08:21.751 "state": "configuring",
00:08:21.751 "raid_level": "raid0",
00:08:21.751 "superblock": false,
00:08:21.751 "num_base_bdevs": 3,
00:08:21.751 "num_base_bdevs_discovered": 1,
00:08:21.751 "num_base_bdevs_operational": 3,
00:08:21.751 "base_bdevs_list": [
00:08:21.751 {
00:08:21.751 "name": "BaseBdev1",
00:08:21.751 "uuid": "ffde08bb-4759-47f9-b158-132959784c81",
00:08:21.751 "is_configured": true,
00:08:21.751 "data_offset": 0,
00:08:21.751 "data_size": 65536
00:08:21.751 },
00:08:21.751 {
00:08:21.751 "name": "BaseBdev2",
00:08:21.751 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:21.751 "is_configured": false,
00:08:21.751 "data_offset": 0,
00:08:21.751 "data_size": 0
00:08:21.751 },
00:08:21.751 {
00:08:21.751 "name": "BaseBdev3",
00:08:21.751 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:21.751 "is_configured": false,
00:08:21.751 "data_offset": 0,
00:08:21.751 "data_size": 0
00:08:21.751 }
00:08:21.751 ]
00:08:21.751 }'
00:08:21.751 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:21.751 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.011 [2024-09-29 21:39:40.843190] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:22.011 [2024-09-29 21:39:40.843301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.011 [2024-09-29 21:39:40.855210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:22.011 [2024-09-29 21:39:40.857405] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:22.011 [2024-09-29 21:39:40.857502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:22.011 [2024-09-29 21:39:40.857531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:22.011 [2024-09-29 21:39:40.857564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:22.011 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:22.012 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:22.012 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:22.012 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:22.012 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:22.012 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:22.012 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:22.012 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:22.012 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.012 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:22.012 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:22.012 "name": "Existed_Raid",
00:08:22.012 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:22.012 "strip_size_kb": 64,
00:08:22.012 "state": "configuring",
00:08:22.012 "raid_level": "raid0",
00:08:22.012 "superblock": false,
00:08:22.012 "num_base_bdevs": 3,
00:08:22.012 "num_base_bdevs_discovered": 1,
00:08:22.012 "num_base_bdevs_operational": 3,
00:08:22.012 "base_bdevs_list": [
00:08:22.012 {
00:08:22.012 "name": "BaseBdev1",
00:08:22.012 "uuid": "ffde08bb-4759-47f9-b158-132959784c81",
00:08:22.012 "is_configured": true,
00:08:22.012 "data_offset": 0,
00:08:22.012 "data_size": 65536
00:08:22.012 },
00:08:22.012 {
00:08:22.012 "name": "BaseBdev2",
00:08:22.012 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:22.012 "is_configured": false,
00:08:22.012 "data_offset": 0,
00:08:22.012 "data_size": 0
00:08:22.012 },
00:08:22.012 {
00:08:22.012 "name": "BaseBdev3",
00:08:22.012 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:22.012 "is_configured": false,
00:08:22.012 "data_offset": 0,
00:08:22.012 "data_size": 0
00:08:22.012 }
00:08:22.012 ]
00:08:22.012 }'
00:08:22.012 21:39:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:22.012 21:39:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.580 [2024-09-29 21:39:41.346209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
BaseBdev2
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.580 [
00:08:22.580 {
00:08:22.580 "name": "BaseBdev2",
00:08:22.580 "aliases": [
00:08:22.580 "a7fd6f60-010a-453f-9572-5fb5eca60690"
00:08:22.580 ],
00:08:22.580 "product_name": "Malloc disk",
00:08:22.580 "block_size": 512,
00:08:22.580 "num_blocks": 65536,
00:08:22.580 "uuid": "a7fd6f60-010a-453f-9572-5fb5eca60690",
00:08:22.580 "assigned_rate_limits": {
00:08:22.580 "rw_ios_per_sec": 0,
00:08:22.580 "rw_mbytes_per_sec": 0,
00:08:22.580 "r_mbytes_per_sec": 0,
00:08:22.580 "w_mbytes_per_sec": 0
00:08:22.580 },
00:08:22.580 "claimed": true,
00:08:22.580 "claim_type": "exclusive_write",
00:08:22.580 "zoned": false,
00:08:22.580 "supported_io_types": {
00:08:22.580 "read": true,
00:08:22.580 "write": true,
00:08:22.580 "unmap": true,
00:08:22.580 "flush": true,
00:08:22.580 "reset": true,
00:08:22.580 "nvme_admin": false,
00:08:22.580 "nvme_io": false,
00:08:22.580 "nvme_io_md": false,
00:08:22.580 "write_zeroes": true,
00:08:22.580 "zcopy": true,
00:08:22.580 "get_zone_info": false,
00:08:22.580 "zone_management": false,
00:08:22.580 "zone_append": false,
00:08:22.580 "compare": false,
00:08:22.580 "compare_and_write": false,
00:08:22.580 "abort": true,
00:08:22.580 "seek_hole": false,
00:08:22.580 "seek_data": false,
00:08:22.580 "copy": true,
00:08:22.580 "nvme_iov_md": false
00:08:22.580 },
00:08:22.580 "memory_domains": [
00:08:22.580 {
00:08:22.580 "dma_device_id": "system",
00:08:22.580 "dma_device_type": 1
00:08:22.580 },
00:08:22.580 {
00:08:22.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:22.580 "dma_device_type": 2
00:08:22.580 }
00:08:22.580 ],
00:08:22.580 "driver_specific": {}
00:08:22.580 }
00:08:22.580 ]
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:22.580 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:23.148 [2024-09-29 21:39:41.876873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-09-29 21:39:41.877010]
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:23.148 [2024-09-29 21:39:41.877069] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:23.148 [2024-09-29 21:39:41.877399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:23.148 [2024-09-29 21:39:41.877636] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:23.148 [2024-09-29 21:39:41.877679] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:23.148 [2024-09-29 21:39:41.877999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.148 BaseBdev3 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.148 
21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.148 [ 00:08:23.148 { 00:08:23.148 "name": "BaseBdev3", 00:08:23.148 "aliases": [ 00:08:23.148 "0dc428f0-dc9f-4ce3-8b0c-7e68df3d35b8" 00:08:23.148 ], 00:08:23.148 "product_name": "Malloc disk", 00:08:23.148 "block_size": 512, 00:08:23.148 "num_blocks": 65536, 00:08:23.148 "uuid": "0dc428f0-dc9f-4ce3-8b0c-7e68df3d35b8", 00:08:23.148 "assigned_rate_limits": { 00:08:23.148 "rw_ios_per_sec": 0, 00:08:23.148 "rw_mbytes_per_sec": 0, 00:08:23.148 "r_mbytes_per_sec": 0, 00:08:23.148 "w_mbytes_per_sec": 0 00:08:23.148 }, 00:08:23.148 "claimed": true, 00:08:23.148 "claim_type": "exclusive_write", 00:08:23.148 "zoned": false, 00:08:23.148 "supported_io_types": { 00:08:23.148 "read": true, 00:08:23.148 "write": true, 00:08:23.148 "unmap": true, 00:08:23.148 "flush": true, 00:08:23.148 "reset": true, 00:08:23.148 "nvme_admin": false, 00:08:23.148 "nvme_io": false, 00:08:23.148 "nvme_io_md": false, 00:08:23.148 "write_zeroes": true, 00:08:23.148 "zcopy": true, 00:08:23.148 "get_zone_info": false, 00:08:23.148 "zone_management": false, 00:08:23.148 "zone_append": false, 00:08:23.148 "compare": false, 00:08:23.148 "compare_and_write": false, 00:08:23.148 "abort": true, 00:08:23.148 "seek_hole": false, 00:08:23.148 "seek_data": false, 00:08:23.148 "copy": true, 00:08:23.148 "nvme_iov_md": false 00:08:23.148 }, 00:08:23.148 "memory_domains": [ 00:08:23.148 { 00:08:23.148 "dma_device_id": "system", 00:08:23.148 "dma_device_type": 1 00:08:23.148 }, 00:08:23.148 { 00:08:23.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.148 "dma_device_type": 2 00:08:23.148 } 00:08:23.148 ], 00:08:23.148 "driver_specific": {} 00:08:23.148 } 00:08:23.148 ] 
00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.148 "name": "Existed_Raid", 00:08:23.148 "uuid": "00d49617-a864-4371-a8d8-ea3739eaecfc", 00:08:23.148 "strip_size_kb": 64, 00:08:23.148 "state": "online", 00:08:23.148 "raid_level": "raid0", 00:08:23.148 "superblock": false, 00:08:23.148 "num_base_bdevs": 3, 00:08:23.148 "num_base_bdevs_discovered": 3, 00:08:23.148 "num_base_bdevs_operational": 3, 00:08:23.148 "base_bdevs_list": [ 00:08:23.148 { 00:08:23.148 "name": "BaseBdev1", 00:08:23.148 "uuid": "ffde08bb-4759-47f9-b158-132959784c81", 00:08:23.148 "is_configured": true, 00:08:23.148 "data_offset": 0, 00:08:23.148 "data_size": 65536 00:08:23.148 }, 00:08:23.148 { 00:08:23.148 "name": "BaseBdev2", 00:08:23.148 "uuid": "a7fd6f60-010a-453f-9572-5fb5eca60690", 00:08:23.148 "is_configured": true, 00:08:23.148 "data_offset": 0, 00:08:23.148 "data_size": 65536 00:08:23.148 }, 00:08:23.148 { 00:08:23.148 "name": "BaseBdev3", 00:08:23.148 "uuid": "0dc428f0-dc9f-4ce3-8b0c-7e68df3d35b8", 00:08:23.148 "is_configured": true, 00:08:23.148 "data_offset": 0, 00:08:23.148 "data_size": 65536 00:08:23.148 } 00:08:23.148 ] 00:08:23.148 }' 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.148 21:39:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.407 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:23.407 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:23.407 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:23.407 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:23.407 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:23.407 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:23.407 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:23.407 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.407 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.407 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:23.407 [2024-09-29 21:39:42.308535] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.407 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.407 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:23.407 "name": "Existed_Raid", 00:08:23.407 "aliases": [ 00:08:23.407 "00d49617-a864-4371-a8d8-ea3739eaecfc" 00:08:23.407 ], 00:08:23.407 "product_name": "Raid Volume", 00:08:23.407 "block_size": 512, 00:08:23.407 "num_blocks": 196608, 00:08:23.407 "uuid": "00d49617-a864-4371-a8d8-ea3739eaecfc", 00:08:23.407 "assigned_rate_limits": { 00:08:23.407 "rw_ios_per_sec": 0, 00:08:23.407 "rw_mbytes_per_sec": 0, 00:08:23.407 "r_mbytes_per_sec": 0, 00:08:23.407 "w_mbytes_per_sec": 0 00:08:23.407 }, 00:08:23.407 "claimed": false, 00:08:23.407 "zoned": false, 00:08:23.407 "supported_io_types": { 00:08:23.407 "read": true, 00:08:23.407 "write": true, 00:08:23.407 "unmap": true, 00:08:23.407 "flush": true, 00:08:23.407 "reset": true, 00:08:23.407 "nvme_admin": false, 00:08:23.407 "nvme_io": false, 00:08:23.407 "nvme_io_md": false, 00:08:23.407 "write_zeroes": true, 00:08:23.407 "zcopy": false, 00:08:23.407 "get_zone_info": false, 00:08:23.407 "zone_management": false, 00:08:23.407 
"zone_append": false, 00:08:23.407 "compare": false, 00:08:23.407 "compare_and_write": false, 00:08:23.407 "abort": false, 00:08:23.407 "seek_hole": false, 00:08:23.407 "seek_data": false, 00:08:23.407 "copy": false, 00:08:23.407 "nvme_iov_md": false 00:08:23.407 }, 00:08:23.407 "memory_domains": [ 00:08:23.407 { 00:08:23.407 "dma_device_id": "system", 00:08:23.407 "dma_device_type": 1 00:08:23.407 }, 00:08:23.407 { 00:08:23.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.407 "dma_device_type": 2 00:08:23.407 }, 00:08:23.407 { 00:08:23.408 "dma_device_id": "system", 00:08:23.408 "dma_device_type": 1 00:08:23.408 }, 00:08:23.408 { 00:08:23.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.408 "dma_device_type": 2 00:08:23.408 }, 00:08:23.408 { 00:08:23.408 "dma_device_id": "system", 00:08:23.408 "dma_device_type": 1 00:08:23.408 }, 00:08:23.408 { 00:08:23.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.408 "dma_device_type": 2 00:08:23.408 } 00:08:23.408 ], 00:08:23.408 "driver_specific": { 00:08:23.408 "raid": { 00:08:23.408 "uuid": "00d49617-a864-4371-a8d8-ea3739eaecfc", 00:08:23.408 "strip_size_kb": 64, 00:08:23.408 "state": "online", 00:08:23.408 "raid_level": "raid0", 00:08:23.408 "superblock": false, 00:08:23.408 "num_base_bdevs": 3, 00:08:23.408 "num_base_bdevs_discovered": 3, 00:08:23.408 "num_base_bdevs_operational": 3, 00:08:23.408 "base_bdevs_list": [ 00:08:23.408 { 00:08:23.408 "name": "BaseBdev1", 00:08:23.408 "uuid": "ffde08bb-4759-47f9-b158-132959784c81", 00:08:23.408 "is_configured": true, 00:08:23.408 "data_offset": 0, 00:08:23.408 "data_size": 65536 00:08:23.408 }, 00:08:23.408 { 00:08:23.408 "name": "BaseBdev2", 00:08:23.408 "uuid": "a7fd6f60-010a-453f-9572-5fb5eca60690", 00:08:23.408 "is_configured": true, 00:08:23.408 "data_offset": 0, 00:08:23.408 "data_size": 65536 00:08:23.408 }, 00:08:23.408 { 00:08:23.408 "name": "BaseBdev3", 00:08:23.408 "uuid": "0dc428f0-dc9f-4ce3-8b0c-7e68df3d35b8", 00:08:23.408 "is_configured": true, 
00:08:23.408 "data_offset": 0, 00:08:23.408 "data_size": 65536 00:08:23.408 } 00:08:23.408 ] 00:08:23.408 } 00:08:23.408 } 00:08:23.408 }' 00:08:23.408 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:23.408 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:23.408 BaseBdev2 00:08:23.408 BaseBdev3' 00:08:23.408 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.667 21:39:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.667 [2024-09-29 21:39:42.543845] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:23.667 [2024-09-29 21:39:42.543914] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.667 [2024-09-29 21:39:42.544003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:23.667 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.926 "name": "Existed_Raid", 00:08:23.926 "uuid": "00d49617-a864-4371-a8d8-ea3739eaecfc", 00:08:23.926 "strip_size_kb": 64, 00:08:23.926 "state": "offline", 00:08:23.926 "raid_level": "raid0", 00:08:23.926 "superblock": false, 00:08:23.926 "num_base_bdevs": 3, 00:08:23.926 "num_base_bdevs_discovered": 2, 00:08:23.926 "num_base_bdevs_operational": 2, 00:08:23.926 "base_bdevs_list": [ 00:08:23.926 { 00:08:23.926 "name": null, 00:08:23.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.926 "is_configured": false, 00:08:23.926 "data_offset": 0, 00:08:23.926 "data_size": 65536 00:08:23.926 }, 00:08:23.926 { 00:08:23.926 "name": "BaseBdev2", 00:08:23.926 "uuid": "a7fd6f60-010a-453f-9572-5fb5eca60690", 00:08:23.926 "is_configured": true, 00:08:23.926 "data_offset": 0, 00:08:23.926 "data_size": 65536 00:08:23.926 }, 00:08:23.926 { 00:08:23.926 "name": "BaseBdev3", 00:08:23.926 "uuid": "0dc428f0-dc9f-4ce3-8b0c-7e68df3d35b8", 00:08:23.926 "is_configured": true, 00:08:23.926 "data_offset": 0, 00:08:23.926 "data_size": 65536 00:08:23.926 } 00:08:23.926 ] 00:08:23.926 }' 00:08:23.926 21:39:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.926 21:39:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.185 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:24.185 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.185 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.185 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.185 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.185 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:24.185 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.185 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:24.185 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:24.185 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:24.185 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.185 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.185 [2024-09-29 21:39:43.104778] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:24.443 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.443 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:24.443 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.443 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.444 21:39:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.444 [2024-09-29 21:39:43.256307] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:24.444 [2024-09-29 21:39:43.256392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.444 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.703 BaseBdev2 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.703 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.703 [ 00:08:24.703 { 00:08:24.703 "name": "BaseBdev2", 00:08:24.703 "aliases": [ 00:08:24.703 "ecc7c26a-9d01-4725-a9c1-b19b5f0744ce" 00:08:24.703 ], 00:08:24.703 "product_name": "Malloc disk", 00:08:24.704 "block_size": 512, 00:08:24.704 "num_blocks": 65536, 00:08:24.704 "uuid": "ecc7c26a-9d01-4725-a9c1-b19b5f0744ce", 00:08:24.704 "assigned_rate_limits": { 00:08:24.704 "rw_ios_per_sec": 0, 00:08:24.704 "rw_mbytes_per_sec": 0, 00:08:24.704 "r_mbytes_per_sec": 0, 00:08:24.704 "w_mbytes_per_sec": 0 00:08:24.704 }, 00:08:24.704 "claimed": false, 00:08:24.704 "zoned": false, 00:08:24.704 "supported_io_types": { 00:08:24.704 "read": true, 00:08:24.704 "write": true, 00:08:24.704 "unmap": true, 00:08:24.704 "flush": true, 00:08:24.704 "reset": true, 00:08:24.704 "nvme_admin": false, 00:08:24.704 "nvme_io": false, 00:08:24.704 "nvme_io_md": false, 00:08:24.704 "write_zeroes": true, 00:08:24.704 "zcopy": true, 00:08:24.704 "get_zone_info": false, 00:08:24.704 "zone_management": false, 00:08:24.704 "zone_append": false, 00:08:24.704 "compare": false, 00:08:24.704 "compare_and_write": false, 00:08:24.704 "abort": true, 00:08:24.704 "seek_hole": false, 00:08:24.704 "seek_data": false, 00:08:24.704 "copy": true, 00:08:24.704 "nvme_iov_md": false 00:08:24.704 }, 00:08:24.704 "memory_domains": [ 00:08:24.704 { 00:08:24.704 "dma_device_id": "system", 00:08:24.704 "dma_device_type": 1 00:08:24.704 }, 
00:08:24.704 { 00:08:24.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.704 "dma_device_type": 2 00:08:24.704 } 00:08:24.704 ], 00:08:24.704 "driver_specific": {} 00:08:24.704 } 00:08:24.704 ] 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.704 BaseBdev3 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.704 [ 00:08:24.704 { 00:08:24.704 "name": "BaseBdev3", 00:08:24.704 "aliases": [ 00:08:24.704 "fd23095a-ac7d-456f-b919-22139ef13d4a" 00:08:24.704 ], 00:08:24.704 "product_name": "Malloc disk", 00:08:24.704 "block_size": 512, 00:08:24.704 "num_blocks": 65536, 00:08:24.704 "uuid": "fd23095a-ac7d-456f-b919-22139ef13d4a", 00:08:24.704 "assigned_rate_limits": { 00:08:24.704 "rw_ios_per_sec": 0, 00:08:24.704 "rw_mbytes_per_sec": 0, 00:08:24.704 "r_mbytes_per_sec": 0, 00:08:24.704 "w_mbytes_per_sec": 0 00:08:24.704 }, 00:08:24.704 "claimed": false, 00:08:24.704 "zoned": false, 00:08:24.704 "supported_io_types": { 00:08:24.704 "read": true, 00:08:24.704 "write": true, 00:08:24.704 "unmap": true, 00:08:24.704 "flush": true, 00:08:24.704 "reset": true, 00:08:24.704 "nvme_admin": false, 00:08:24.704 "nvme_io": false, 00:08:24.704 "nvme_io_md": false, 00:08:24.704 "write_zeroes": true, 00:08:24.704 "zcopy": true, 00:08:24.704 "get_zone_info": false, 00:08:24.704 "zone_management": false, 00:08:24.704 "zone_append": false, 00:08:24.704 "compare": false, 00:08:24.704 "compare_and_write": false, 00:08:24.704 "abort": true, 00:08:24.704 "seek_hole": false, 00:08:24.704 "seek_data": false, 00:08:24.704 "copy": true, 00:08:24.704 "nvme_iov_md": false 00:08:24.704 }, 00:08:24.704 "memory_domains": [ 00:08:24.704 { 00:08:24.704 "dma_device_id": "system", 00:08:24.704 "dma_device_type": 1 00:08:24.704 }, 00:08:24.704 { 
00:08:24.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.704 "dma_device_type": 2 00:08:24.704 } 00:08:24.704 ], 00:08:24.704 "driver_specific": {} 00:08:24.704 } 00:08:24.704 ] 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.704 [2024-09-29 21:39:43.586723] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.704 [2024-09-29 21:39:43.586777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.704 [2024-09-29 21:39:43.586798] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.704 [2024-09-29 21:39:43.588824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.704 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.704 "name": "Existed_Raid", 00:08:24.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.704 "strip_size_kb": 64, 00:08:24.704 "state": "configuring", 00:08:24.704 "raid_level": "raid0", 00:08:24.704 "superblock": false, 00:08:24.704 "num_base_bdevs": 3, 00:08:24.704 "num_base_bdevs_discovered": 2, 00:08:24.704 "num_base_bdevs_operational": 3, 00:08:24.704 "base_bdevs_list": [ 00:08:24.704 { 00:08:24.704 "name": "BaseBdev1", 00:08:24.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.704 
"is_configured": false, 00:08:24.704 "data_offset": 0, 00:08:24.704 "data_size": 0 00:08:24.704 }, 00:08:24.704 { 00:08:24.704 "name": "BaseBdev2", 00:08:24.704 "uuid": "ecc7c26a-9d01-4725-a9c1-b19b5f0744ce", 00:08:24.704 "is_configured": true, 00:08:24.704 "data_offset": 0, 00:08:24.704 "data_size": 65536 00:08:24.704 }, 00:08:24.704 { 00:08:24.704 "name": "BaseBdev3", 00:08:24.704 "uuid": "fd23095a-ac7d-456f-b919-22139ef13d4a", 00:08:24.704 "is_configured": true, 00:08:24.704 "data_offset": 0, 00:08:24.704 "data_size": 65536 00:08:24.704 } 00:08:24.704 ] 00:08:24.704 }' 00:08:24.705 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.705 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.273 21:39:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:25.273 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.273 21:39:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.273 [2024-09-29 21:39:44.001962] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.273 21:39:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.273 "name": "Existed_Raid", 00:08:25.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.273 "strip_size_kb": 64, 00:08:25.273 "state": "configuring", 00:08:25.273 "raid_level": "raid0", 00:08:25.273 "superblock": false, 00:08:25.273 "num_base_bdevs": 3, 00:08:25.273 "num_base_bdevs_discovered": 1, 00:08:25.273 "num_base_bdevs_operational": 3, 00:08:25.273 "base_bdevs_list": [ 00:08:25.273 { 00:08:25.273 "name": "BaseBdev1", 00:08:25.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.273 "is_configured": false, 00:08:25.273 "data_offset": 0, 00:08:25.273 "data_size": 0 00:08:25.273 }, 00:08:25.273 { 00:08:25.273 "name": null, 00:08:25.273 "uuid": "ecc7c26a-9d01-4725-a9c1-b19b5f0744ce", 00:08:25.273 "is_configured": false, 00:08:25.273 "data_offset": 0, 
00:08:25.273 "data_size": 65536 00:08:25.273 }, 00:08:25.273 { 00:08:25.273 "name": "BaseBdev3", 00:08:25.273 "uuid": "fd23095a-ac7d-456f-b919-22139ef13d4a", 00:08:25.273 "is_configured": true, 00:08:25.273 "data_offset": 0, 00:08:25.273 "data_size": 65536 00:08:25.273 } 00:08:25.273 ] 00:08:25.273 }' 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.273 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.532 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:25.532 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.532 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.532 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.532 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.790 [2024-09-29 21:39:44.559172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.790 BaseBdev1 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.790 [ 00:08:25.790 { 00:08:25.790 "name": "BaseBdev1", 00:08:25.790 "aliases": [ 00:08:25.790 "419dead6-fa74-40bd-9f29-834eb8093048" 00:08:25.790 ], 00:08:25.790 "product_name": "Malloc disk", 00:08:25.790 "block_size": 512, 00:08:25.790 "num_blocks": 65536, 00:08:25.790 "uuid": "419dead6-fa74-40bd-9f29-834eb8093048", 00:08:25.790 "assigned_rate_limits": { 00:08:25.790 "rw_ios_per_sec": 0, 00:08:25.790 "rw_mbytes_per_sec": 0, 00:08:25.790 "r_mbytes_per_sec": 0, 00:08:25.790 "w_mbytes_per_sec": 0 00:08:25.790 }, 00:08:25.790 "claimed": true, 00:08:25.790 "claim_type": "exclusive_write", 00:08:25.790 "zoned": false, 00:08:25.790 "supported_io_types": { 00:08:25.790 "read": true, 00:08:25.790 "write": true, 00:08:25.790 "unmap": 
true, 00:08:25.790 "flush": true, 00:08:25.790 "reset": true, 00:08:25.790 "nvme_admin": false, 00:08:25.790 "nvme_io": false, 00:08:25.790 "nvme_io_md": false, 00:08:25.790 "write_zeroes": true, 00:08:25.790 "zcopy": true, 00:08:25.790 "get_zone_info": false, 00:08:25.790 "zone_management": false, 00:08:25.790 "zone_append": false, 00:08:25.790 "compare": false, 00:08:25.790 "compare_and_write": false, 00:08:25.790 "abort": true, 00:08:25.790 "seek_hole": false, 00:08:25.790 "seek_data": false, 00:08:25.790 "copy": true, 00:08:25.790 "nvme_iov_md": false 00:08:25.790 }, 00:08:25.790 "memory_domains": [ 00:08:25.790 { 00:08:25.790 "dma_device_id": "system", 00:08:25.790 "dma_device_type": 1 00:08:25.790 }, 00:08:25.790 { 00:08:25.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.790 "dma_device_type": 2 00:08:25.790 } 00:08:25.790 ], 00:08:25.790 "driver_specific": {} 00:08:25.790 } 00:08:25.790 ] 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.790 21:39:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.790 "name": "Existed_Raid", 00:08:25.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.790 "strip_size_kb": 64, 00:08:25.790 "state": "configuring", 00:08:25.790 "raid_level": "raid0", 00:08:25.790 "superblock": false, 00:08:25.790 "num_base_bdevs": 3, 00:08:25.790 "num_base_bdevs_discovered": 2, 00:08:25.790 "num_base_bdevs_operational": 3, 00:08:25.790 "base_bdevs_list": [ 00:08:25.790 { 00:08:25.790 "name": "BaseBdev1", 00:08:25.790 "uuid": "419dead6-fa74-40bd-9f29-834eb8093048", 00:08:25.790 "is_configured": true, 00:08:25.790 "data_offset": 0, 00:08:25.790 "data_size": 65536 00:08:25.790 }, 00:08:25.790 { 00:08:25.790 "name": null, 00:08:25.790 "uuid": "ecc7c26a-9d01-4725-a9c1-b19b5f0744ce", 00:08:25.790 "is_configured": false, 00:08:25.790 "data_offset": 0, 00:08:25.790 "data_size": 65536 00:08:25.790 }, 00:08:25.790 { 00:08:25.790 "name": "BaseBdev3", 00:08:25.790 "uuid": "fd23095a-ac7d-456f-b919-22139ef13d4a", 00:08:25.790 "is_configured": true, 00:08:25.790 "data_offset": 0, 
00:08:25.790 "data_size": 65536 00:08:25.790 } 00:08:25.790 ] 00:08:25.790 }' 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.790 21:39:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.049 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.049 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.049 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.049 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.308 [2024-09-29 21:39:45.078309] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.308 "name": "Existed_Raid", 00:08:26.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.308 "strip_size_kb": 64, 00:08:26.308 "state": "configuring", 00:08:26.308 "raid_level": "raid0", 00:08:26.308 "superblock": false, 00:08:26.308 "num_base_bdevs": 3, 00:08:26.308 "num_base_bdevs_discovered": 1, 00:08:26.308 "num_base_bdevs_operational": 3, 00:08:26.308 "base_bdevs_list": [ 00:08:26.308 { 00:08:26.308 "name": "BaseBdev1", 00:08:26.308 "uuid": "419dead6-fa74-40bd-9f29-834eb8093048", 00:08:26.308 "is_configured": true, 00:08:26.308 "data_offset": 0, 00:08:26.308 "data_size": 65536 00:08:26.308 }, 00:08:26.308 { 
00:08:26.308 "name": null, 00:08:26.308 "uuid": "ecc7c26a-9d01-4725-a9c1-b19b5f0744ce", 00:08:26.308 "is_configured": false, 00:08:26.308 "data_offset": 0, 00:08:26.308 "data_size": 65536 00:08:26.308 }, 00:08:26.308 { 00:08:26.308 "name": null, 00:08:26.308 "uuid": "fd23095a-ac7d-456f-b919-22139ef13d4a", 00:08:26.308 "is_configured": false, 00:08:26.308 "data_offset": 0, 00:08:26.308 "data_size": 65536 00:08:26.308 } 00:08:26.308 ] 00:08:26.308 }' 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.308 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.567 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.567 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.567 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.567 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:26.567 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.567 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:26.567 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:26.567 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.568 [2024-09-29 21:39:45.529539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.568 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.826 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.826 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.826 "name": "Existed_Raid", 00:08:26.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.826 "strip_size_kb": 64, 00:08:26.826 "state": "configuring", 00:08:26.826 "raid_level": "raid0", 00:08:26.826 
"superblock": false, 00:08:26.826 "num_base_bdevs": 3, 00:08:26.826 "num_base_bdevs_discovered": 2, 00:08:26.826 "num_base_bdevs_operational": 3, 00:08:26.826 "base_bdevs_list": [ 00:08:26.826 { 00:08:26.826 "name": "BaseBdev1", 00:08:26.826 "uuid": "419dead6-fa74-40bd-9f29-834eb8093048", 00:08:26.826 "is_configured": true, 00:08:26.826 "data_offset": 0, 00:08:26.826 "data_size": 65536 00:08:26.826 }, 00:08:26.826 { 00:08:26.826 "name": null, 00:08:26.826 "uuid": "ecc7c26a-9d01-4725-a9c1-b19b5f0744ce", 00:08:26.826 "is_configured": false, 00:08:26.826 "data_offset": 0, 00:08:26.826 "data_size": 65536 00:08:26.826 }, 00:08:26.826 { 00:08:26.826 "name": "BaseBdev3", 00:08:26.826 "uuid": "fd23095a-ac7d-456f-b919-22139ef13d4a", 00:08:26.826 "is_configured": true, 00:08:26.826 "data_offset": 0, 00:08:26.826 "data_size": 65536 00:08:26.826 } 00:08:26.826 ] 00:08:26.826 }' 00:08:26.826 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.826 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.085 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:27.085 21:39:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.085 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.085 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.085 21:39:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.085 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:27.085 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:27.085 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:27.085 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.085 [2024-09-29 21:39:46.016750] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.348 "name": "Existed_Raid", 00:08:27.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.348 "strip_size_kb": 64, 00:08:27.348 "state": "configuring", 00:08:27.348 "raid_level": "raid0", 00:08:27.348 "superblock": false, 00:08:27.348 "num_base_bdevs": 3, 00:08:27.348 "num_base_bdevs_discovered": 1, 00:08:27.348 "num_base_bdevs_operational": 3, 00:08:27.348 "base_bdevs_list": [ 00:08:27.348 { 00:08:27.348 "name": null, 00:08:27.348 "uuid": "419dead6-fa74-40bd-9f29-834eb8093048", 00:08:27.348 "is_configured": false, 00:08:27.348 "data_offset": 0, 00:08:27.348 "data_size": 65536 00:08:27.348 }, 00:08:27.348 { 00:08:27.348 "name": null, 00:08:27.348 "uuid": "ecc7c26a-9d01-4725-a9c1-b19b5f0744ce", 00:08:27.348 "is_configured": false, 00:08:27.348 "data_offset": 0, 00:08:27.348 "data_size": 65536 00:08:27.348 }, 00:08:27.348 { 00:08:27.348 "name": "BaseBdev3", 00:08:27.348 "uuid": "fd23095a-ac7d-456f-b919-22139ef13d4a", 00:08:27.348 "is_configured": true, 00:08:27.348 "data_offset": 0, 00:08:27.348 "data_size": 65536 00:08:27.348 } 00:08:27.348 ] 00:08:27.348 }' 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.348 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.613 [2024-09-29 21:39:46.533998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.613 "name": "Existed_Raid", 00:08:27.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.613 "strip_size_kb": 64, 00:08:27.613 "state": "configuring", 00:08:27.613 "raid_level": "raid0", 00:08:27.613 "superblock": false, 00:08:27.613 "num_base_bdevs": 3, 00:08:27.613 "num_base_bdevs_discovered": 2, 00:08:27.613 "num_base_bdevs_operational": 3, 00:08:27.613 "base_bdevs_list": [ 00:08:27.613 { 00:08:27.613 "name": null, 00:08:27.613 "uuid": "419dead6-fa74-40bd-9f29-834eb8093048", 00:08:27.613 "is_configured": false, 00:08:27.613 "data_offset": 0, 00:08:27.613 "data_size": 65536 00:08:27.613 }, 00:08:27.613 { 00:08:27.613 "name": "BaseBdev2", 00:08:27.613 "uuid": "ecc7c26a-9d01-4725-a9c1-b19b5f0744ce", 00:08:27.613 "is_configured": true, 00:08:27.613 "data_offset": 0, 00:08:27.613 "data_size": 65536 00:08:27.613 }, 00:08:27.613 { 00:08:27.613 "name": "BaseBdev3", 00:08:27.613 "uuid": "fd23095a-ac7d-456f-b919-22139ef13d4a", 00:08:27.613 "is_configured": true, 00:08:27.613 "data_offset": 0, 00:08:27.613 "data_size": 65536 00:08:27.613 } 00:08:27.613 ] 00:08:27.613 }' 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.613 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.189 21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.189 
21:39:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:28.189 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.189 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.189 21:39:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.189 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:28.189 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:28.189 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.189 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.189 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.189 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.189 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 419dead6-fa74-40bd-9f29-834eb8093048 00:08:28.189 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.189 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.189 [2024-09-29 21:39:47.090771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:28.189 [2024-09-29 21:39:47.090816] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:28.189 [2024-09-29 21:39:47.090826] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:28.189 [2024-09-29 21:39:47.091114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:28.190 [2024-09-29 21:39:47.091300] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:28.190 [2024-09-29 21:39:47.091314] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:28.190 [2024-09-29 21:39:47.091570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.190 NewBaseBdev 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:28.190 [ 00:08:28.190 { 00:08:28.190 "name": "NewBaseBdev", 00:08:28.190 "aliases": [ 00:08:28.190 "419dead6-fa74-40bd-9f29-834eb8093048" 00:08:28.190 ], 00:08:28.190 "product_name": "Malloc disk", 00:08:28.190 "block_size": 512, 00:08:28.190 "num_blocks": 65536, 00:08:28.190 "uuid": "419dead6-fa74-40bd-9f29-834eb8093048", 00:08:28.190 "assigned_rate_limits": { 00:08:28.190 "rw_ios_per_sec": 0, 00:08:28.190 "rw_mbytes_per_sec": 0, 00:08:28.190 "r_mbytes_per_sec": 0, 00:08:28.190 "w_mbytes_per_sec": 0 00:08:28.190 }, 00:08:28.190 "claimed": true, 00:08:28.190 "claim_type": "exclusive_write", 00:08:28.190 "zoned": false, 00:08:28.190 "supported_io_types": { 00:08:28.190 "read": true, 00:08:28.190 "write": true, 00:08:28.190 "unmap": true, 00:08:28.190 "flush": true, 00:08:28.190 "reset": true, 00:08:28.190 "nvme_admin": false, 00:08:28.190 "nvme_io": false, 00:08:28.190 "nvme_io_md": false, 00:08:28.190 "write_zeroes": true, 00:08:28.190 "zcopy": true, 00:08:28.190 "get_zone_info": false, 00:08:28.190 "zone_management": false, 00:08:28.190 "zone_append": false, 00:08:28.190 "compare": false, 00:08:28.190 "compare_and_write": false, 00:08:28.190 "abort": true, 00:08:28.190 "seek_hole": false, 00:08:28.190 "seek_data": false, 00:08:28.190 "copy": true, 00:08:28.190 "nvme_iov_md": false 00:08:28.190 }, 00:08:28.190 "memory_domains": [ 00:08:28.190 { 00:08:28.190 "dma_device_id": "system", 00:08:28.190 "dma_device_type": 1 00:08:28.190 }, 00:08:28.190 { 00:08:28.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.190 "dma_device_type": 2 00:08:28.190 } 00:08:28.190 ], 00:08:28.190 "driver_specific": {} 00:08:28.190 } 00:08:28.190 ] 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.190 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.449 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.449 "name": "Existed_Raid", 00:08:28.449 "uuid": "c18e90d7-3ee5-4ebd-9386-1ac84335450f", 00:08:28.449 "strip_size_kb": 64, 00:08:28.449 "state": "online", 00:08:28.449 "raid_level": "raid0", 00:08:28.449 "superblock": false, 00:08:28.449 "num_base_bdevs": 3, 00:08:28.449 
"num_base_bdevs_discovered": 3, 00:08:28.449 "num_base_bdevs_operational": 3, 00:08:28.449 "base_bdevs_list": [ 00:08:28.449 { 00:08:28.449 "name": "NewBaseBdev", 00:08:28.449 "uuid": "419dead6-fa74-40bd-9f29-834eb8093048", 00:08:28.450 "is_configured": true, 00:08:28.450 "data_offset": 0, 00:08:28.450 "data_size": 65536 00:08:28.450 }, 00:08:28.450 { 00:08:28.450 "name": "BaseBdev2", 00:08:28.450 "uuid": "ecc7c26a-9d01-4725-a9c1-b19b5f0744ce", 00:08:28.450 "is_configured": true, 00:08:28.450 "data_offset": 0, 00:08:28.450 "data_size": 65536 00:08:28.450 }, 00:08:28.450 { 00:08:28.450 "name": "BaseBdev3", 00:08:28.450 "uuid": "fd23095a-ac7d-456f-b919-22139ef13d4a", 00:08:28.450 "is_configured": true, 00:08:28.450 "data_offset": 0, 00:08:28.450 "data_size": 65536 00:08:28.450 } 00:08:28.450 ] 00:08:28.450 }' 00:08:28.450 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.450 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.709 [2024-09-29 21:39:47.558271] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.709 "name": "Existed_Raid", 00:08:28.709 "aliases": [ 00:08:28.709 "c18e90d7-3ee5-4ebd-9386-1ac84335450f" 00:08:28.709 ], 00:08:28.709 "product_name": "Raid Volume", 00:08:28.709 "block_size": 512, 00:08:28.709 "num_blocks": 196608, 00:08:28.709 "uuid": "c18e90d7-3ee5-4ebd-9386-1ac84335450f", 00:08:28.709 "assigned_rate_limits": { 00:08:28.709 "rw_ios_per_sec": 0, 00:08:28.709 "rw_mbytes_per_sec": 0, 00:08:28.709 "r_mbytes_per_sec": 0, 00:08:28.709 "w_mbytes_per_sec": 0 00:08:28.709 }, 00:08:28.709 "claimed": false, 00:08:28.709 "zoned": false, 00:08:28.709 "supported_io_types": { 00:08:28.709 "read": true, 00:08:28.709 "write": true, 00:08:28.709 "unmap": true, 00:08:28.709 "flush": true, 00:08:28.709 "reset": true, 00:08:28.709 "nvme_admin": false, 00:08:28.709 "nvme_io": false, 00:08:28.709 "nvme_io_md": false, 00:08:28.709 "write_zeroes": true, 00:08:28.709 "zcopy": false, 00:08:28.709 "get_zone_info": false, 00:08:28.709 "zone_management": false, 00:08:28.709 "zone_append": false, 00:08:28.709 "compare": false, 00:08:28.709 "compare_and_write": false, 00:08:28.709 "abort": false, 00:08:28.709 "seek_hole": false, 00:08:28.709 "seek_data": false, 00:08:28.709 "copy": false, 00:08:28.709 "nvme_iov_md": false 00:08:28.709 }, 00:08:28.709 "memory_domains": [ 00:08:28.709 { 00:08:28.709 "dma_device_id": "system", 00:08:28.709 "dma_device_type": 1 00:08:28.709 }, 00:08:28.709 { 00:08:28.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.709 "dma_device_type": 2 00:08:28.709 }, 00:08:28.709 
{ 00:08:28.709 "dma_device_id": "system", 00:08:28.709 "dma_device_type": 1 00:08:28.709 }, 00:08:28.709 { 00:08:28.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.709 "dma_device_type": 2 00:08:28.709 }, 00:08:28.709 { 00:08:28.709 "dma_device_id": "system", 00:08:28.709 "dma_device_type": 1 00:08:28.709 }, 00:08:28.709 { 00:08:28.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.709 "dma_device_type": 2 00:08:28.709 } 00:08:28.709 ], 00:08:28.709 "driver_specific": { 00:08:28.709 "raid": { 00:08:28.709 "uuid": "c18e90d7-3ee5-4ebd-9386-1ac84335450f", 00:08:28.709 "strip_size_kb": 64, 00:08:28.709 "state": "online", 00:08:28.709 "raid_level": "raid0", 00:08:28.709 "superblock": false, 00:08:28.709 "num_base_bdevs": 3, 00:08:28.709 "num_base_bdevs_discovered": 3, 00:08:28.709 "num_base_bdevs_operational": 3, 00:08:28.709 "base_bdevs_list": [ 00:08:28.709 { 00:08:28.709 "name": "NewBaseBdev", 00:08:28.709 "uuid": "419dead6-fa74-40bd-9f29-834eb8093048", 00:08:28.709 "is_configured": true, 00:08:28.709 "data_offset": 0, 00:08:28.709 "data_size": 65536 00:08:28.709 }, 00:08:28.709 { 00:08:28.709 "name": "BaseBdev2", 00:08:28.709 "uuid": "ecc7c26a-9d01-4725-a9c1-b19b5f0744ce", 00:08:28.709 "is_configured": true, 00:08:28.709 "data_offset": 0, 00:08:28.709 "data_size": 65536 00:08:28.709 }, 00:08:28.709 { 00:08:28.709 "name": "BaseBdev3", 00:08:28.709 "uuid": "fd23095a-ac7d-456f-b919-22139ef13d4a", 00:08:28.709 "is_configured": true, 00:08:28.709 "data_offset": 0, 00:08:28.709 "data_size": 65536 00:08:28.709 } 00:08:28.709 ] 00:08:28.709 } 00:08:28.709 } 00:08:28.709 }' 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:28.709 BaseBdev2 00:08:28.709 BaseBdev3' 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.709 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.970 
21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.970 [2024-09-29 21:39:47.797544] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.970 [2024-09-29 21:39:47.797574] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.970 [2024-09-29 21:39:47.797656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.970 [2024-09-29 21:39:47.797712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.970 [2024-09-29 21:39:47.797741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63881 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63881 ']' 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63881 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63881 00:08:28.970 killing process with pid 63881 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63881' 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63881 00:08:28.970 [2024-09-29 21:39:47.831572] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.970 21:39:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63881 00:08:29.230 [2024-09-29 21:39:48.141854] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:30.611 00:08:30.611 real 0m10.408s 00:08:30.611 user 0m16.142s 00:08:30.611 sys 0m1.915s 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.611 
21:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.611 ************************************ 00:08:30.611 END TEST raid_state_function_test 00:08:30.611 ************************************ 00:08:30.611 21:39:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:30.611 21:39:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:30.611 21:39:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.611 21:39:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.611 ************************************ 00:08:30.611 START TEST raid_state_function_test_sb 00:08:30.611 ************************************ 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64497 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:30.611 Process raid pid: 64497 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64497' 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64497 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64497 ']' 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.611 21:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.871 [2024-09-29 21:39:49.624052] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:30.871 [2024-09-29 21:39:49.624181] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.871 [2024-09-29 21:39:49.787747] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.131 [2024-09-29 21:39:50.033737] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.390 [2024-09-29 21:39:50.262103] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.391 [2024-09-29 21:39:50.262144] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.650 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.650 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:31.650 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:31.650 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.651 [2024-09-29 21:39:50.454534] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.651 [2024-09-29 21:39:50.454594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.651 [2024-09-29 21:39:50.454604] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:31.651 [2024-09-29 21:39:50.454614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:31.651 [2024-09-29 21:39:50.454620] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:31.651 [2024-09-29 21:39:50.454630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.651 "name": "Existed_Raid", 00:08:31.651 "uuid": "b1d65f7e-eeb9-4658-9f83-bdc89dc4a1ae", 00:08:31.651 "strip_size_kb": 64, 00:08:31.651 "state": "configuring", 00:08:31.651 "raid_level": "raid0", 00:08:31.651 "superblock": true, 00:08:31.651 "num_base_bdevs": 3, 00:08:31.651 "num_base_bdevs_discovered": 0, 00:08:31.651 "num_base_bdevs_operational": 3, 00:08:31.651 "base_bdevs_list": [ 00:08:31.651 { 00:08:31.651 "name": "BaseBdev1", 00:08:31.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.651 "is_configured": false, 00:08:31.651 "data_offset": 0, 00:08:31.651 "data_size": 0 00:08:31.651 }, 00:08:31.651 { 00:08:31.651 "name": "BaseBdev2", 00:08:31.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.651 "is_configured": false, 00:08:31.651 "data_offset": 0, 00:08:31.651 "data_size": 0 00:08:31.651 }, 00:08:31.651 { 00:08:31.651 "name": "BaseBdev3", 00:08:31.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.651 "is_configured": false, 00:08:31.651 "data_offset": 0, 00:08:31.651 "data_size": 0 00:08:31.651 } 00:08:31.651 ] 00:08:31.651 }' 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.651 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.220 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.220 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.220 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.220 [2024-09-29 21:39:50.917654] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.220 [2024-09-29 21:39:50.917694] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:32.220 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.220 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.220 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.220 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.220 [2024-09-29 21:39:50.929667] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.220 [2024-09-29 21:39:50.929710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.220 [2024-09-29 21:39:50.929718] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.220 [2024-09-29 21:39:50.929727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.220 [2024-09-29 21:39:50.929733] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:32.220 [2024-09-29 21:39:50.929742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:32.220 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.220 21:39:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.220 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.220 21:39:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.220 [2024-09-29 21:39:51.016642] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.220 BaseBdev1 
00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.220 [ 00:08:32.220 { 00:08:32.220 "name": "BaseBdev1", 00:08:32.220 "aliases": [ 00:08:32.220 "377b1ece-65eb-4310-a421-1a363014a0ab" 00:08:32.220 ], 00:08:32.220 "product_name": "Malloc disk", 00:08:32.220 "block_size": 512, 00:08:32.220 "num_blocks": 65536, 00:08:32.220 "uuid": "377b1ece-65eb-4310-a421-1a363014a0ab", 00:08:32.220 "assigned_rate_limits": { 00:08:32.220 
"rw_ios_per_sec": 0, 00:08:32.220 "rw_mbytes_per_sec": 0, 00:08:32.220 "r_mbytes_per_sec": 0, 00:08:32.220 "w_mbytes_per_sec": 0 00:08:32.220 }, 00:08:32.220 "claimed": true, 00:08:32.220 "claim_type": "exclusive_write", 00:08:32.220 "zoned": false, 00:08:32.220 "supported_io_types": { 00:08:32.220 "read": true, 00:08:32.220 "write": true, 00:08:32.220 "unmap": true, 00:08:32.220 "flush": true, 00:08:32.220 "reset": true, 00:08:32.220 "nvme_admin": false, 00:08:32.220 "nvme_io": false, 00:08:32.220 "nvme_io_md": false, 00:08:32.220 "write_zeroes": true, 00:08:32.220 "zcopy": true, 00:08:32.220 "get_zone_info": false, 00:08:32.220 "zone_management": false, 00:08:32.220 "zone_append": false, 00:08:32.220 "compare": false, 00:08:32.220 "compare_and_write": false, 00:08:32.220 "abort": true, 00:08:32.220 "seek_hole": false, 00:08:32.220 "seek_data": false, 00:08:32.220 "copy": true, 00:08:32.220 "nvme_iov_md": false 00:08:32.220 }, 00:08:32.220 "memory_domains": [ 00:08:32.220 { 00:08:32.220 "dma_device_id": "system", 00:08:32.220 "dma_device_type": 1 00:08:32.220 }, 00:08:32.220 { 00:08:32.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.220 "dma_device_type": 2 00:08:32.220 } 00:08:32.220 ], 00:08:32.220 "driver_specific": {} 00:08:32.220 } 00:08:32.220 ] 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.220 "name": "Existed_Raid", 00:08:32.220 "uuid": "fa261699-61fb-45aa-9cb7-6ded2d62a227", 00:08:32.220 "strip_size_kb": 64, 00:08:32.220 "state": "configuring", 00:08:32.220 "raid_level": "raid0", 00:08:32.220 "superblock": true, 00:08:32.220 "num_base_bdevs": 3, 00:08:32.220 "num_base_bdevs_discovered": 1, 00:08:32.220 "num_base_bdevs_operational": 3, 00:08:32.220 "base_bdevs_list": [ 00:08:32.220 { 00:08:32.220 "name": "BaseBdev1", 00:08:32.220 "uuid": "377b1ece-65eb-4310-a421-1a363014a0ab", 00:08:32.220 "is_configured": true, 00:08:32.220 "data_offset": 2048, 00:08:32.220 "data_size": 63488 
00:08:32.220 }, 00:08:32.220 { 00:08:32.220 "name": "BaseBdev2", 00:08:32.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.220 "is_configured": false, 00:08:32.220 "data_offset": 0, 00:08:32.220 "data_size": 0 00:08:32.220 }, 00:08:32.220 { 00:08:32.220 "name": "BaseBdev3", 00:08:32.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.220 "is_configured": false, 00:08:32.220 "data_offset": 0, 00:08:32.220 "data_size": 0 00:08:32.220 } 00:08:32.220 ] 00:08:32.220 }' 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.220 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.789 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.789 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.789 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.789 [2024-09-29 21:39:51.491946] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.789 [2024-09-29 21:39:51.492053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:32.789 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.789 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.789 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.789 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.789 [2024-09-29 21:39:51.503981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.789 [2024-09-29 
21:39:51.506091] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.789 [2024-09-29 21:39:51.506165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.789 [2024-09-29 21:39:51.506192] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:32.789 [2024-09-29 21:39:51.506214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:32.789 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.790 "name": "Existed_Raid", 00:08:32.790 "uuid": "78642448-38d3-400e-a205-af03d17564a3", 00:08:32.790 "strip_size_kb": 64, 00:08:32.790 "state": "configuring", 00:08:32.790 "raid_level": "raid0", 00:08:32.790 "superblock": true, 00:08:32.790 "num_base_bdevs": 3, 00:08:32.790 "num_base_bdevs_discovered": 1, 00:08:32.790 "num_base_bdevs_operational": 3, 00:08:32.790 "base_bdevs_list": [ 00:08:32.790 { 00:08:32.790 "name": "BaseBdev1", 00:08:32.790 "uuid": "377b1ece-65eb-4310-a421-1a363014a0ab", 00:08:32.790 "is_configured": true, 00:08:32.790 "data_offset": 2048, 00:08:32.790 "data_size": 63488 00:08:32.790 }, 00:08:32.790 { 00:08:32.790 "name": "BaseBdev2", 00:08:32.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.790 "is_configured": false, 00:08:32.790 "data_offset": 0, 00:08:32.790 "data_size": 0 00:08:32.790 }, 00:08:32.790 { 00:08:32.790 "name": "BaseBdev3", 00:08:32.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.790 "is_configured": false, 00:08:32.790 "data_offset": 0, 00:08:32.790 "data_size": 0 00:08:32.790 } 00:08:32.790 ] 00:08:32.790 }' 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.790 21:39:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:33.049 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:33.049 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.049 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.049 [2024-09-29 21:39:51.990923] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.049 BaseBdev2 00:08:33.049 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.049 21:39:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:33.049 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:33.049 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.049 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:33.049 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.049 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.050 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.050 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.050 21:39:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.050 [ 00:08:33.050 { 00:08:33.050 "name": "BaseBdev2", 00:08:33.050 "aliases": [ 00:08:33.050 "79881407-9d16-4a70-8ae6-8ef7e4629f0f" 00:08:33.050 ], 00:08:33.050 "product_name": "Malloc disk", 00:08:33.050 "block_size": 512, 00:08:33.050 "num_blocks": 65536, 00:08:33.050 "uuid": "79881407-9d16-4a70-8ae6-8ef7e4629f0f", 00:08:33.050 "assigned_rate_limits": { 00:08:33.050 "rw_ios_per_sec": 0, 00:08:33.050 "rw_mbytes_per_sec": 0, 00:08:33.050 "r_mbytes_per_sec": 0, 00:08:33.050 "w_mbytes_per_sec": 0 00:08:33.050 }, 00:08:33.050 "claimed": true, 00:08:33.050 "claim_type": "exclusive_write", 00:08:33.050 "zoned": false, 00:08:33.050 "supported_io_types": { 00:08:33.050 "read": true, 00:08:33.050 "write": true, 00:08:33.050 "unmap": true, 00:08:33.050 "flush": true, 00:08:33.050 "reset": true, 00:08:33.050 "nvme_admin": false, 00:08:33.050 "nvme_io": false, 00:08:33.050 "nvme_io_md": false, 00:08:33.050 "write_zeroes": true, 00:08:33.050 "zcopy": true, 00:08:33.050 "get_zone_info": false, 00:08:33.050 "zone_management": false, 00:08:33.050 "zone_append": false, 00:08:33.050 "compare": false, 00:08:33.050 "compare_and_write": false, 00:08:33.050 "abort": true, 00:08:33.050 "seek_hole": false, 00:08:33.050 "seek_data": false, 00:08:33.050 "copy": true, 00:08:33.050 "nvme_iov_md": false 00:08:33.050 }, 00:08:33.050 "memory_domains": [ 00:08:33.050 { 00:08:33.050 "dma_device_id": "system", 00:08:33.050 "dma_device_type": 1 00:08:33.050 }, 00:08:33.050 { 00:08:33.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.050 "dma_device_type": 2 00:08:33.050 } 00:08:33.050 ], 00:08:33.050 "driver_specific": {} 00:08:33.050 } 00:08:33.050 ] 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.050 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.310 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.310 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.310 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.310 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.310 21:39:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.310 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.310 "name": "Existed_Raid", 00:08:33.310 "uuid": "78642448-38d3-400e-a205-af03d17564a3", 00:08:33.310 "strip_size_kb": 64, 00:08:33.310 "state": "configuring", 00:08:33.310 "raid_level": "raid0", 00:08:33.310 "superblock": true, 00:08:33.310 "num_base_bdevs": 3, 00:08:33.310 "num_base_bdevs_discovered": 2, 00:08:33.310 "num_base_bdevs_operational": 3, 00:08:33.310 "base_bdevs_list": [ 00:08:33.310 { 00:08:33.310 "name": "BaseBdev1", 00:08:33.310 "uuid": "377b1ece-65eb-4310-a421-1a363014a0ab", 00:08:33.310 "is_configured": true, 00:08:33.310 "data_offset": 2048, 00:08:33.310 "data_size": 63488 00:08:33.310 }, 00:08:33.310 { 00:08:33.310 "name": "BaseBdev2", 00:08:33.310 "uuid": "79881407-9d16-4a70-8ae6-8ef7e4629f0f", 00:08:33.310 "is_configured": true, 00:08:33.310 "data_offset": 2048, 00:08:33.310 "data_size": 63488 00:08:33.310 }, 00:08:33.310 { 00:08:33.310 "name": "BaseBdev3", 00:08:33.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.310 "is_configured": false, 00:08:33.310 "data_offset": 0, 00:08:33.310 "data_size": 0 00:08:33.310 } 00:08:33.310 ] 00:08:33.310 }' 00:08:33.310 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.310 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.570 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:33.570 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.570 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.570 [2024-09-29 21:39:52.541601] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:33.570 [2024-09-29 21:39:52.541964] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:33.570 [2024-09-29 21:39:52.542028] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:33.570 [2024-09-29 21:39:52.542362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:33.570 BaseBdev3 00:08:33.570 [2024-09-29 21:39:52.542568] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:33.570 [2024-09-29 21:39:52.542580] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:33.570 [2024-09-29 21:39:52.542730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.570 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.570 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:33.570 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:33.570 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.570 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:33.570 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.570 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.570 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.570 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.570 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.830 [ 00:08:33.830 { 00:08:33.830 "name": "BaseBdev3", 00:08:33.830 "aliases": [ 00:08:33.830 "bc627c9b-681f-4ceb-88f3-0e18c8526d59" 00:08:33.830 ], 00:08:33.830 "product_name": "Malloc disk", 00:08:33.830 "block_size": 512, 00:08:33.830 "num_blocks": 65536, 00:08:33.830 "uuid": "bc627c9b-681f-4ceb-88f3-0e18c8526d59", 00:08:33.830 "assigned_rate_limits": { 00:08:33.830 "rw_ios_per_sec": 0, 00:08:33.830 "rw_mbytes_per_sec": 0, 00:08:33.830 "r_mbytes_per_sec": 0, 00:08:33.830 "w_mbytes_per_sec": 0 00:08:33.830 }, 00:08:33.830 "claimed": true, 00:08:33.830 "claim_type": "exclusive_write", 00:08:33.830 "zoned": false, 00:08:33.830 "supported_io_types": { 00:08:33.830 "read": true, 00:08:33.830 "write": true, 00:08:33.830 "unmap": true, 00:08:33.830 "flush": true, 00:08:33.830 "reset": true, 00:08:33.830 "nvme_admin": false, 00:08:33.830 "nvme_io": false, 00:08:33.830 "nvme_io_md": false, 00:08:33.830 "write_zeroes": true, 00:08:33.830 "zcopy": true, 00:08:33.830 "get_zone_info": false, 00:08:33.830 "zone_management": false, 00:08:33.830 "zone_append": false, 00:08:33.830 "compare": false, 00:08:33.830 "compare_and_write": false, 00:08:33.830 "abort": true, 00:08:33.830 "seek_hole": false, 00:08:33.830 "seek_data": false, 00:08:33.830 "copy": true, 00:08:33.830 "nvme_iov_md": false 00:08:33.830 }, 00:08:33.830 "memory_domains": [ 00:08:33.830 { 00:08:33.830 "dma_device_id": "system", 00:08:33.830 "dma_device_type": 1 00:08:33.830 }, 00:08:33.830 { 00:08:33.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.830 "dma_device_type": 2 00:08:33.830 } 00:08:33.830 ], 00:08:33.830 "driver_specific": 
{} 00:08:33.830 } 00:08:33.830 ] 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.830 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.830 "name": "Existed_Raid", 00:08:33.830 "uuid": "78642448-38d3-400e-a205-af03d17564a3", 00:08:33.830 "strip_size_kb": 64, 00:08:33.830 "state": "online", 00:08:33.830 "raid_level": "raid0", 00:08:33.830 "superblock": true, 00:08:33.830 "num_base_bdevs": 3, 00:08:33.830 "num_base_bdevs_discovered": 3, 00:08:33.830 "num_base_bdevs_operational": 3, 00:08:33.830 "base_bdevs_list": [ 00:08:33.830 { 00:08:33.830 "name": "BaseBdev1", 00:08:33.830 "uuid": "377b1ece-65eb-4310-a421-1a363014a0ab", 00:08:33.830 "is_configured": true, 00:08:33.830 "data_offset": 2048, 00:08:33.830 "data_size": 63488 00:08:33.830 }, 00:08:33.830 { 00:08:33.831 "name": "BaseBdev2", 00:08:33.831 "uuid": "79881407-9d16-4a70-8ae6-8ef7e4629f0f", 00:08:33.831 "is_configured": true, 00:08:33.831 "data_offset": 2048, 00:08:33.831 "data_size": 63488 00:08:33.831 }, 00:08:33.831 { 00:08:33.831 "name": "BaseBdev3", 00:08:33.831 "uuid": "bc627c9b-681f-4ceb-88f3-0e18c8526d59", 00:08:33.831 "is_configured": true, 00:08:33.831 "data_offset": 2048, 00:08:33.831 "data_size": 63488 00:08:33.831 } 00:08:33.831 ] 00:08:33.831 }' 00:08:33.831 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.831 21:39:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.091 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:34.091 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:34.091 21:39:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:34.091 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:34.091 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:34.091 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:34.091 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:34.091 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:34.091 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.091 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.091 [2024-09-29 21:39:53.013205] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.091 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.091 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:34.091 "name": "Existed_Raid", 00:08:34.091 "aliases": [ 00:08:34.091 "78642448-38d3-400e-a205-af03d17564a3" 00:08:34.091 ], 00:08:34.091 "product_name": "Raid Volume", 00:08:34.091 "block_size": 512, 00:08:34.091 "num_blocks": 190464, 00:08:34.091 "uuid": "78642448-38d3-400e-a205-af03d17564a3", 00:08:34.091 "assigned_rate_limits": { 00:08:34.091 "rw_ios_per_sec": 0, 00:08:34.091 "rw_mbytes_per_sec": 0, 00:08:34.091 "r_mbytes_per_sec": 0, 00:08:34.091 "w_mbytes_per_sec": 0 00:08:34.091 }, 00:08:34.091 "claimed": false, 00:08:34.091 "zoned": false, 00:08:34.091 "supported_io_types": { 00:08:34.091 "read": true, 00:08:34.091 "write": true, 00:08:34.091 "unmap": true, 00:08:34.091 "flush": true, 00:08:34.091 "reset": true, 00:08:34.091 "nvme_admin": false, 00:08:34.091 "nvme_io": false, 00:08:34.091 "nvme_io_md": false, 00:08:34.091 
"write_zeroes": true, 00:08:34.091 "zcopy": false, 00:08:34.091 "get_zone_info": false, 00:08:34.091 "zone_management": false, 00:08:34.091 "zone_append": false, 00:08:34.091 "compare": false, 00:08:34.091 "compare_and_write": false, 00:08:34.091 "abort": false, 00:08:34.091 "seek_hole": false, 00:08:34.091 "seek_data": false, 00:08:34.091 "copy": false, 00:08:34.091 "nvme_iov_md": false 00:08:34.091 }, 00:08:34.091 "memory_domains": [ 00:08:34.091 { 00:08:34.091 "dma_device_id": "system", 00:08:34.091 "dma_device_type": 1 00:08:34.092 }, 00:08:34.092 { 00:08:34.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.092 "dma_device_type": 2 00:08:34.092 }, 00:08:34.092 { 00:08:34.092 "dma_device_id": "system", 00:08:34.092 "dma_device_type": 1 00:08:34.092 }, 00:08:34.092 { 00:08:34.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.092 "dma_device_type": 2 00:08:34.092 }, 00:08:34.092 { 00:08:34.092 "dma_device_id": "system", 00:08:34.092 "dma_device_type": 1 00:08:34.092 }, 00:08:34.092 { 00:08:34.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.092 "dma_device_type": 2 00:08:34.092 } 00:08:34.092 ], 00:08:34.092 "driver_specific": { 00:08:34.092 "raid": { 00:08:34.092 "uuid": "78642448-38d3-400e-a205-af03d17564a3", 00:08:34.092 "strip_size_kb": 64, 00:08:34.092 "state": "online", 00:08:34.092 "raid_level": "raid0", 00:08:34.092 "superblock": true, 00:08:34.092 "num_base_bdevs": 3, 00:08:34.092 "num_base_bdevs_discovered": 3, 00:08:34.092 "num_base_bdevs_operational": 3, 00:08:34.092 "base_bdevs_list": [ 00:08:34.092 { 00:08:34.092 "name": "BaseBdev1", 00:08:34.092 "uuid": "377b1ece-65eb-4310-a421-1a363014a0ab", 00:08:34.092 "is_configured": true, 00:08:34.092 "data_offset": 2048, 00:08:34.092 "data_size": 63488 00:08:34.092 }, 00:08:34.092 { 00:08:34.092 "name": "BaseBdev2", 00:08:34.092 "uuid": "79881407-9d16-4a70-8ae6-8ef7e4629f0f", 00:08:34.092 "is_configured": true, 00:08:34.092 "data_offset": 2048, 00:08:34.092 "data_size": 63488 00:08:34.092 }, 
00:08:34.092 { 00:08:34.092 "name": "BaseBdev3", 00:08:34.092 "uuid": "bc627c9b-681f-4ceb-88f3-0e18c8526d59", 00:08:34.092 "is_configured": true, 00:08:34.092 "data_offset": 2048, 00:08:34.092 "data_size": 63488 00:08:34.092 } 00:08:34.092 ] 00:08:34.092 } 00:08:34.092 } 00:08:34.092 }' 00:08:34.092 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:34.352 BaseBdev2 00:08:34.352 BaseBdev3' 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.352 
21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.352 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.353 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.353 [2024-09-29 21:39:53.280460] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.353 [2024-09-29 21:39:53.280530] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.353 [2024-09-29 21:39:53.280627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:34.612 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.613 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.613 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.613 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.613 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.613 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.613 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.613 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.613 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.613 "name": "Existed_Raid", 00:08:34.613 "uuid": "78642448-38d3-400e-a205-af03d17564a3", 00:08:34.613 "strip_size_kb": 64, 00:08:34.613 "state": "offline", 00:08:34.613 "raid_level": "raid0", 00:08:34.613 "superblock": true, 00:08:34.613 "num_base_bdevs": 3, 00:08:34.613 "num_base_bdevs_discovered": 2, 00:08:34.613 "num_base_bdevs_operational": 2, 00:08:34.613 "base_bdevs_list": [ 00:08:34.613 { 00:08:34.613 "name": null, 00:08:34.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.613 "is_configured": false, 00:08:34.613 "data_offset": 0, 00:08:34.613 "data_size": 63488 00:08:34.613 }, 00:08:34.613 { 00:08:34.613 "name": "BaseBdev2", 00:08:34.613 "uuid": "79881407-9d16-4a70-8ae6-8ef7e4629f0f", 00:08:34.613 "is_configured": true, 00:08:34.613 "data_offset": 2048, 00:08:34.613 "data_size": 63488 00:08:34.613 }, 00:08:34.613 { 00:08:34.613 "name": "BaseBdev3", 00:08:34.613 "uuid": "bc627c9b-681f-4ceb-88f3-0e18c8526d59", 
00:08:34.613 "is_configured": true, 00:08:34.613 "data_offset": 2048, 00:08:34.613 "data_size": 63488 00:08:34.613 } 00:08:34.613 ] 00:08:34.613 }' 00:08:34.613 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.613 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.872 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:34.872 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.872 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.872 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.872 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.872 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:34.872 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.133 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:35.133 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:35.133 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:35.133 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.133 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.133 [2024-09-29 21:39:53.888958] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:35.133 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.133 21:39:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:35.133 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.133 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.133 21:39:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:35.133 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.133 21:39:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.133 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.133 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:35.133 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:35.133 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:35.133 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.133 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.133 [2024-09-29 21:39:54.050035] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:35.133 [2024-09-29 21:39:54.050161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:35.393 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.393 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:35.393 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.393 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.394 BaseBdev2 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.394 [ 00:08:35.394 { 00:08:35.394 "name": "BaseBdev2", 00:08:35.394 "aliases": [ 00:08:35.394 "7f95ce35-ed08-484c-9e96-784684df335f" 00:08:35.394 ], 00:08:35.394 "product_name": "Malloc disk", 00:08:35.394 "block_size": 512, 00:08:35.394 "num_blocks": 65536, 00:08:35.394 "uuid": "7f95ce35-ed08-484c-9e96-784684df335f", 00:08:35.394 "assigned_rate_limits": { 00:08:35.394 "rw_ios_per_sec": 0, 00:08:35.394 "rw_mbytes_per_sec": 0, 00:08:35.394 "r_mbytes_per_sec": 0, 00:08:35.394 "w_mbytes_per_sec": 0 00:08:35.394 }, 00:08:35.394 "claimed": false, 00:08:35.394 "zoned": false, 00:08:35.394 "supported_io_types": { 00:08:35.394 "read": true, 00:08:35.394 "write": true, 00:08:35.394 "unmap": true, 00:08:35.394 "flush": true, 00:08:35.394 "reset": true, 00:08:35.394 "nvme_admin": false, 00:08:35.394 "nvme_io": false, 00:08:35.394 "nvme_io_md": false, 00:08:35.394 "write_zeroes": true, 00:08:35.394 "zcopy": true, 00:08:35.394 "get_zone_info": false, 00:08:35.394 "zone_management": false, 00:08:35.394 
"zone_append": false, 00:08:35.394 "compare": false, 00:08:35.394 "compare_and_write": false, 00:08:35.394 "abort": true, 00:08:35.394 "seek_hole": false, 00:08:35.394 "seek_data": false, 00:08:35.394 "copy": true, 00:08:35.394 "nvme_iov_md": false 00:08:35.394 }, 00:08:35.394 "memory_domains": [ 00:08:35.394 { 00:08:35.394 "dma_device_id": "system", 00:08:35.394 "dma_device_type": 1 00:08:35.394 }, 00:08:35.394 { 00:08:35.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.394 "dma_device_type": 2 00:08:35.394 } 00:08:35.394 ], 00:08:35.394 "driver_specific": {} 00:08:35.394 } 00:08:35.394 ] 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.394 BaseBdev3 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:35.394 
21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.394 [ 00:08:35.394 { 00:08:35.394 "name": "BaseBdev3", 00:08:35.394 "aliases": [ 00:08:35.394 "b6386a4e-cc78-4610-9090-cacad3d200e1" 00:08:35.394 ], 00:08:35.394 "product_name": "Malloc disk", 00:08:35.394 "block_size": 512, 00:08:35.394 "num_blocks": 65536, 00:08:35.394 "uuid": "b6386a4e-cc78-4610-9090-cacad3d200e1", 00:08:35.394 "assigned_rate_limits": { 00:08:35.394 "rw_ios_per_sec": 0, 00:08:35.394 "rw_mbytes_per_sec": 0, 00:08:35.394 "r_mbytes_per_sec": 0, 00:08:35.394 "w_mbytes_per_sec": 0 00:08:35.394 }, 00:08:35.394 "claimed": false, 00:08:35.394 "zoned": false, 00:08:35.394 "supported_io_types": { 00:08:35.394 "read": true, 00:08:35.394 "write": true, 00:08:35.394 "unmap": true, 00:08:35.394 "flush": true, 00:08:35.394 "reset": true, 00:08:35.394 "nvme_admin": false, 00:08:35.394 "nvme_io": false, 00:08:35.394 "nvme_io_md": false, 00:08:35.394 "write_zeroes": true, 00:08:35.394 "zcopy": true, 00:08:35.394 "get_zone_info": false, 
00:08:35.394 "zone_management": false, 00:08:35.394 "zone_append": false, 00:08:35.394 "compare": false, 00:08:35.394 "compare_and_write": false, 00:08:35.394 "abort": true, 00:08:35.394 "seek_hole": false, 00:08:35.394 "seek_data": false, 00:08:35.394 "copy": true, 00:08:35.394 "nvme_iov_md": false 00:08:35.394 }, 00:08:35.394 "memory_domains": [ 00:08:35.394 { 00:08:35.394 "dma_device_id": "system", 00:08:35.394 "dma_device_type": 1 00:08:35.394 }, 00:08:35.394 { 00:08:35.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.394 "dma_device_type": 2 00:08:35.394 } 00:08:35.394 ], 00:08:35.394 "driver_specific": {} 00:08:35.394 } 00:08:35.394 ] 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.394 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.394 [2024-09-29 21:39:54.372859] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.394 [2024-09-29 21:39:54.372977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.394 [2024-09-29 21:39:54.373022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.394 [2024-09-29 21:39:54.375138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:35.654 "name": "Existed_Raid", 00:08:35.654 "uuid": "58a91c40-c3f5-43f3-86b7-9a4b66e2ce8f", 00:08:35.654 "strip_size_kb": 64, 00:08:35.654 "state": "configuring", 00:08:35.654 "raid_level": "raid0", 00:08:35.654 "superblock": true, 00:08:35.654 "num_base_bdevs": 3, 00:08:35.654 "num_base_bdevs_discovered": 2, 00:08:35.654 "num_base_bdevs_operational": 3, 00:08:35.654 "base_bdevs_list": [ 00:08:35.654 { 00:08:35.654 "name": "BaseBdev1", 00:08:35.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.654 "is_configured": false, 00:08:35.654 "data_offset": 0, 00:08:35.654 "data_size": 0 00:08:35.654 }, 00:08:35.654 { 00:08:35.654 "name": "BaseBdev2", 00:08:35.654 "uuid": "7f95ce35-ed08-484c-9e96-784684df335f", 00:08:35.654 "is_configured": true, 00:08:35.654 "data_offset": 2048, 00:08:35.654 "data_size": 63488 00:08:35.654 }, 00:08:35.654 { 00:08:35.654 "name": "BaseBdev3", 00:08:35.654 "uuid": "b6386a4e-cc78-4610-9090-cacad3d200e1", 00:08:35.654 "is_configured": true, 00:08:35.654 "data_offset": 2048, 00:08:35.654 "data_size": 63488 00:08:35.654 } 00:08:35.654 ] 00:08:35.654 }' 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.654 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.915 [2024-09-29 21:39:54.824101] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.915 "name": "Existed_Raid", 00:08:35.915 "uuid": "58a91c40-c3f5-43f3-86b7-9a4b66e2ce8f", 00:08:35.915 "strip_size_kb": 64, 00:08:35.915 "state": "configuring", 00:08:35.915 "raid_level": "raid0", 
00:08:35.915 "superblock": true, 00:08:35.915 "num_base_bdevs": 3, 00:08:35.915 "num_base_bdevs_discovered": 1, 00:08:35.915 "num_base_bdevs_operational": 3, 00:08:35.915 "base_bdevs_list": [ 00:08:35.915 { 00:08:35.915 "name": "BaseBdev1", 00:08:35.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.915 "is_configured": false, 00:08:35.915 "data_offset": 0, 00:08:35.915 "data_size": 0 00:08:35.915 }, 00:08:35.915 { 00:08:35.915 "name": null, 00:08:35.915 "uuid": "7f95ce35-ed08-484c-9e96-784684df335f", 00:08:35.915 "is_configured": false, 00:08:35.915 "data_offset": 0, 00:08:35.915 "data_size": 63488 00:08:35.915 }, 00:08:35.915 { 00:08:35.915 "name": "BaseBdev3", 00:08:35.915 "uuid": "b6386a4e-cc78-4610-9090-cacad3d200e1", 00:08:35.915 "is_configured": true, 00:08:35.915 "data_offset": 2048, 00:08:35.915 "data_size": 63488 00:08:35.915 } 00:08:35.915 ] 00:08:35.915 }' 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.915 21:39:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.485 [2024-09-29 21:39:55.361101] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.485 BaseBdev1 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.485 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.485 [ 00:08:36.485 { 00:08:36.485 "name": "BaseBdev1", 00:08:36.485 
"aliases": [ 00:08:36.485 "7619af78-a170-4492-bd8a-0ee68258739c" 00:08:36.485 ], 00:08:36.485 "product_name": "Malloc disk", 00:08:36.485 "block_size": 512, 00:08:36.485 "num_blocks": 65536, 00:08:36.485 "uuid": "7619af78-a170-4492-bd8a-0ee68258739c", 00:08:36.485 "assigned_rate_limits": { 00:08:36.485 "rw_ios_per_sec": 0, 00:08:36.485 "rw_mbytes_per_sec": 0, 00:08:36.485 "r_mbytes_per_sec": 0, 00:08:36.485 "w_mbytes_per_sec": 0 00:08:36.485 }, 00:08:36.485 "claimed": true, 00:08:36.485 "claim_type": "exclusive_write", 00:08:36.485 "zoned": false, 00:08:36.485 "supported_io_types": { 00:08:36.485 "read": true, 00:08:36.485 "write": true, 00:08:36.485 "unmap": true, 00:08:36.485 "flush": true, 00:08:36.485 "reset": true, 00:08:36.485 "nvme_admin": false, 00:08:36.485 "nvme_io": false, 00:08:36.485 "nvme_io_md": false, 00:08:36.485 "write_zeroes": true, 00:08:36.485 "zcopy": true, 00:08:36.485 "get_zone_info": false, 00:08:36.485 "zone_management": false, 00:08:36.485 "zone_append": false, 00:08:36.485 "compare": false, 00:08:36.485 "compare_and_write": false, 00:08:36.485 "abort": true, 00:08:36.485 "seek_hole": false, 00:08:36.485 "seek_data": false, 00:08:36.485 "copy": true, 00:08:36.485 "nvme_iov_md": false 00:08:36.485 }, 00:08:36.485 "memory_domains": [ 00:08:36.485 { 00:08:36.485 "dma_device_id": "system", 00:08:36.485 "dma_device_type": 1 00:08:36.485 }, 00:08:36.486 { 00:08:36.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.486 "dma_device_type": 2 00:08:36.486 } 00:08:36.486 ], 00:08:36.486 "driver_specific": {} 00:08:36.486 } 00:08:36.486 ] 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.486 21:39:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.486 "name": "Existed_Raid", 00:08:36.486 "uuid": "58a91c40-c3f5-43f3-86b7-9a4b66e2ce8f", 00:08:36.486 "strip_size_kb": 64, 00:08:36.486 "state": "configuring", 00:08:36.486 "raid_level": "raid0", 00:08:36.486 "superblock": true, 00:08:36.486 "num_base_bdevs": 3, 00:08:36.486 
"num_base_bdevs_discovered": 2, 00:08:36.486 "num_base_bdevs_operational": 3, 00:08:36.486 "base_bdevs_list": [ 00:08:36.486 { 00:08:36.486 "name": "BaseBdev1", 00:08:36.486 "uuid": "7619af78-a170-4492-bd8a-0ee68258739c", 00:08:36.486 "is_configured": true, 00:08:36.486 "data_offset": 2048, 00:08:36.486 "data_size": 63488 00:08:36.486 }, 00:08:36.486 { 00:08:36.486 "name": null, 00:08:36.486 "uuid": "7f95ce35-ed08-484c-9e96-784684df335f", 00:08:36.486 "is_configured": false, 00:08:36.486 "data_offset": 0, 00:08:36.486 "data_size": 63488 00:08:36.486 }, 00:08:36.486 { 00:08:36.486 "name": "BaseBdev3", 00:08:36.486 "uuid": "b6386a4e-cc78-4610-9090-cacad3d200e1", 00:08:36.486 "is_configured": true, 00:08:36.486 "data_offset": 2048, 00:08:36.486 "data_size": 63488 00:08:36.486 } 00:08:36.486 ] 00:08:36.486 }' 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.486 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.055 21:39:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.055 [2024-09-29 21:39:55.820330] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.055 21:39:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.055 "name": "Existed_Raid", 00:08:37.055 "uuid": "58a91c40-c3f5-43f3-86b7-9a4b66e2ce8f", 00:08:37.055 "strip_size_kb": 64, 00:08:37.055 "state": "configuring", 00:08:37.055 "raid_level": "raid0", 00:08:37.055 "superblock": true, 00:08:37.055 "num_base_bdevs": 3, 00:08:37.055 "num_base_bdevs_discovered": 1, 00:08:37.055 "num_base_bdevs_operational": 3, 00:08:37.055 "base_bdevs_list": [ 00:08:37.055 { 00:08:37.055 "name": "BaseBdev1", 00:08:37.055 "uuid": "7619af78-a170-4492-bd8a-0ee68258739c", 00:08:37.055 "is_configured": true, 00:08:37.055 "data_offset": 2048, 00:08:37.055 "data_size": 63488 00:08:37.055 }, 00:08:37.055 { 00:08:37.055 "name": null, 00:08:37.055 "uuid": "7f95ce35-ed08-484c-9e96-784684df335f", 00:08:37.055 "is_configured": false, 00:08:37.055 "data_offset": 0, 00:08:37.055 "data_size": 63488 00:08:37.055 }, 00:08:37.055 { 00:08:37.055 "name": null, 00:08:37.055 "uuid": "b6386a4e-cc78-4610-9090-cacad3d200e1", 00:08:37.055 "is_configured": false, 00:08:37.055 "data_offset": 0, 00:08:37.055 "data_size": 63488 00:08:37.055 } 00:08:37.055 ] 00:08:37.055 }' 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.055 21:39:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.315 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.315 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.315 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.315 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:37.315 21:39:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.575 [2024-09-29 21:39:56.307527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.575 "name": "Existed_Raid", 00:08:37.575 "uuid": "58a91c40-c3f5-43f3-86b7-9a4b66e2ce8f", 00:08:37.575 "strip_size_kb": 64, 00:08:37.575 "state": "configuring", 00:08:37.575 "raid_level": "raid0", 00:08:37.575 "superblock": true, 00:08:37.575 "num_base_bdevs": 3, 00:08:37.575 "num_base_bdevs_discovered": 2, 00:08:37.575 "num_base_bdevs_operational": 3, 00:08:37.575 "base_bdevs_list": [ 00:08:37.575 { 00:08:37.575 "name": "BaseBdev1", 00:08:37.575 "uuid": "7619af78-a170-4492-bd8a-0ee68258739c", 00:08:37.575 "is_configured": true, 00:08:37.575 "data_offset": 2048, 00:08:37.575 "data_size": 63488 00:08:37.575 }, 00:08:37.575 { 00:08:37.575 "name": null, 00:08:37.575 "uuid": "7f95ce35-ed08-484c-9e96-784684df335f", 00:08:37.575 "is_configured": false, 00:08:37.575 "data_offset": 0, 00:08:37.575 "data_size": 63488 00:08:37.575 }, 00:08:37.575 { 00:08:37.575 "name": "BaseBdev3", 00:08:37.575 "uuid": "b6386a4e-cc78-4610-9090-cacad3d200e1", 00:08:37.575 "is_configured": true, 00:08:37.575 "data_offset": 2048, 00:08:37.575 "data_size": 63488 00:08:37.575 } 00:08:37.575 ] 00:08:37.575 }' 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.575 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:37.835 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:37.835 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.835 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.835 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.835 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.835 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:37.835 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:37.835 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.835 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.835 [2024-09-29 21:39:56.758850] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.095 "name": "Existed_Raid", 00:08:38.095 "uuid": "58a91c40-c3f5-43f3-86b7-9a4b66e2ce8f", 00:08:38.095 "strip_size_kb": 64, 00:08:38.095 "state": "configuring", 00:08:38.095 "raid_level": "raid0", 00:08:38.095 "superblock": true, 00:08:38.095 "num_base_bdevs": 3, 00:08:38.095 "num_base_bdevs_discovered": 1, 00:08:38.095 "num_base_bdevs_operational": 3, 00:08:38.095 "base_bdevs_list": [ 00:08:38.095 { 00:08:38.095 "name": null, 00:08:38.095 "uuid": "7619af78-a170-4492-bd8a-0ee68258739c", 00:08:38.095 "is_configured": false, 00:08:38.095 "data_offset": 0, 00:08:38.095 "data_size": 63488 00:08:38.095 }, 00:08:38.095 { 00:08:38.095 "name": null, 00:08:38.095 "uuid": "7f95ce35-ed08-484c-9e96-784684df335f", 00:08:38.095 "is_configured": false, 00:08:38.095 "data_offset": 0, 00:08:38.095 "data_size": 63488 00:08:38.095 
}, 00:08:38.095 { 00:08:38.095 "name": "BaseBdev3", 00:08:38.095 "uuid": "b6386a4e-cc78-4610-9090-cacad3d200e1", 00:08:38.095 "is_configured": true, 00:08:38.095 "data_offset": 2048, 00:08:38.095 "data_size": 63488 00:08:38.095 } 00:08:38.095 ] 00:08:38.095 }' 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.095 21:39:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.355 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.355 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:38.355 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.355 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.355 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.355 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:38.355 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:38.355 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.355 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.355 [2024-09-29 21:39:57.338683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.615 "name": "Existed_Raid", 00:08:38.615 "uuid": "58a91c40-c3f5-43f3-86b7-9a4b66e2ce8f", 00:08:38.615 "strip_size_kb": 64, 00:08:38.615 "state": "configuring", 00:08:38.615 "raid_level": "raid0", 00:08:38.615 "superblock": true, 00:08:38.615 "num_base_bdevs": 3, 00:08:38.615 "num_base_bdevs_discovered": 2, 00:08:38.615 
"num_base_bdevs_operational": 3, 00:08:38.615 "base_bdevs_list": [ 00:08:38.615 { 00:08:38.615 "name": null, 00:08:38.615 "uuid": "7619af78-a170-4492-bd8a-0ee68258739c", 00:08:38.615 "is_configured": false, 00:08:38.615 "data_offset": 0, 00:08:38.615 "data_size": 63488 00:08:38.615 }, 00:08:38.615 { 00:08:38.615 "name": "BaseBdev2", 00:08:38.615 "uuid": "7f95ce35-ed08-484c-9e96-784684df335f", 00:08:38.615 "is_configured": true, 00:08:38.615 "data_offset": 2048, 00:08:38.615 "data_size": 63488 00:08:38.615 }, 00:08:38.615 { 00:08:38.615 "name": "BaseBdev3", 00:08:38.615 "uuid": "b6386a4e-cc78-4610-9090-cacad3d200e1", 00:08:38.615 "is_configured": true, 00:08:38.615 "data_offset": 2048, 00:08:38.615 "data_size": 63488 00:08:38.615 } 00:08:38.615 ] 00:08:38.615 }' 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.615 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.874 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.874 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.874 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.874 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:38.874 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.874 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:38.874 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:38.874 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.874 21:39:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.874 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7619af78-a170-4492-bd8a-0ee68258739c 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.134 [2024-09-29 21:39:57.911011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:39.134 [2024-09-29 21:39:57.911355] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:39.134 [2024-09-29 21:39:57.911414] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:39.134 [2024-09-29 21:39:57.911730] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:39.134 NewBaseBdev 00:08:39.134 [2024-09-29 21:39:57.911918] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:39.134 [2024-09-29 21:39:57.911928] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:39.134 [2024-09-29 21:39:57.912095] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:39.134 21:39:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.134 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.134 [ 00:08:39.134 { 00:08:39.134 "name": "NewBaseBdev", 00:08:39.134 "aliases": [ 00:08:39.134 "7619af78-a170-4492-bd8a-0ee68258739c" 00:08:39.134 ], 00:08:39.134 "product_name": "Malloc disk", 00:08:39.134 "block_size": 512, 00:08:39.134 "num_blocks": 65536, 00:08:39.134 "uuid": "7619af78-a170-4492-bd8a-0ee68258739c", 00:08:39.135 "assigned_rate_limits": { 00:08:39.135 "rw_ios_per_sec": 0, 00:08:39.135 "rw_mbytes_per_sec": 0, 00:08:39.135 "r_mbytes_per_sec": 0, 00:08:39.135 "w_mbytes_per_sec": 0 00:08:39.135 }, 00:08:39.135 "claimed": true, 00:08:39.135 "claim_type": "exclusive_write", 00:08:39.135 "zoned": false, 00:08:39.135 "supported_io_types": { 00:08:39.135 "read": true, 00:08:39.135 "write": true, 00:08:39.135 "unmap": true, 
00:08:39.135 "flush": true, 00:08:39.135 "reset": true, 00:08:39.135 "nvme_admin": false, 00:08:39.135 "nvme_io": false, 00:08:39.135 "nvme_io_md": false, 00:08:39.135 "write_zeroes": true, 00:08:39.135 "zcopy": true, 00:08:39.135 "get_zone_info": false, 00:08:39.135 "zone_management": false, 00:08:39.135 "zone_append": false, 00:08:39.135 "compare": false, 00:08:39.135 "compare_and_write": false, 00:08:39.135 "abort": true, 00:08:39.135 "seek_hole": false, 00:08:39.135 "seek_data": false, 00:08:39.135 "copy": true, 00:08:39.135 "nvme_iov_md": false 00:08:39.135 }, 00:08:39.135 "memory_domains": [ 00:08:39.135 { 00:08:39.135 "dma_device_id": "system", 00:08:39.135 "dma_device_type": 1 00:08:39.135 }, 00:08:39.135 { 00:08:39.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.135 "dma_device_type": 2 00:08:39.135 } 00:08:39.135 ], 00:08:39.135 "driver_specific": {} 00:08:39.135 } 00:08:39.135 ] 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.135 21:39:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.135 "name": "Existed_Raid", 00:08:39.135 "uuid": "58a91c40-c3f5-43f3-86b7-9a4b66e2ce8f", 00:08:39.135 "strip_size_kb": 64, 00:08:39.135 "state": "online", 00:08:39.135 "raid_level": "raid0", 00:08:39.135 "superblock": true, 00:08:39.135 "num_base_bdevs": 3, 00:08:39.135 "num_base_bdevs_discovered": 3, 00:08:39.135 "num_base_bdevs_operational": 3, 00:08:39.135 "base_bdevs_list": [ 00:08:39.135 { 00:08:39.135 "name": "NewBaseBdev", 00:08:39.135 "uuid": "7619af78-a170-4492-bd8a-0ee68258739c", 00:08:39.135 "is_configured": true, 00:08:39.135 "data_offset": 2048, 00:08:39.135 "data_size": 63488 00:08:39.135 }, 00:08:39.135 { 00:08:39.135 "name": "BaseBdev2", 00:08:39.135 "uuid": "7f95ce35-ed08-484c-9e96-784684df335f", 00:08:39.135 "is_configured": true, 00:08:39.135 "data_offset": 2048, 00:08:39.135 "data_size": 63488 00:08:39.135 }, 00:08:39.135 { 00:08:39.135 "name": "BaseBdev3", 00:08:39.135 "uuid": "b6386a4e-cc78-4610-9090-cacad3d200e1", 00:08:39.135 "is_configured": 
true, 00:08:39.135 "data_offset": 2048, 00:08:39.135 "data_size": 63488 00:08:39.135 } 00:08:39.135 ] 00:08:39.135 }' 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.135 21:39:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.705 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:39.705 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:39.705 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:39.705 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:39.705 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:39.705 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:39.705 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:39.705 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:39.705 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.705 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.705 [2024-09-29 21:39:58.418444] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.705 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.705 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.705 "name": "Existed_Raid", 00:08:39.705 "aliases": [ 00:08:39.705 "58a91c40-c3f5-43f3-86b7-9a4b66e2ce8f" 00:08:39.705 ], 00:08:39.705 "product_name": "Raid Volume", 
00:08:39.705 "block_size": 512, 00:08:39.705 "num_blocks": 190464, 00:08:39.705 "uuid": "58a91c40-c3f5-43f3-86b7-9a4b66e2ce8f", 00:08:39.705 "assigned_rate_limits": { 00:08:39.705 "rw_ios_per_sec": 0, 00:08:39.705 "rw_mbytes_per_sec": 0, 00:08:39.705 "r_mbytes_per_sec": 0, 00:08:39.705 "w_mbytes_per_sec": 0 00:08:39.705 }, 00:08:39.705 "claimed": false, 00:08:39.705 "zoned": false, 00:08:39.705 "supported_io_types": { 00:08:39.705 "read": true, 00:08:39.705 "write": true, 00:08:39.705 "unmap": true, 00:08:39.705 "flush": true, 00:08:39.705 "reset": true, 00:08:39.705 "nvme_admin": false, 00:08:39.705 "nvme_io": false, 00:08:39.705 "nvme_io_md": false, 00:08:39.705 "write_zeroes": true, 00:08:39.705 "zcopy": false, 00:08:39.705 "get_zone_info": false, 00:08:39.705 "zone_management": false, 00:08:39.705 "zone_append": false, 00:08:39.705 "compare": false, 00:08:39.705 "compare_and_write": false, 00:08:39.705 "abort": false, 00:08:39.705 "seek_hole": false, 00:08:39.705 "seek_data": false, 00:08:39.705 "copy": false, 00:08:39.705 "nvme_iov_md": false 00:08:39.705 }, 00:08:39.705 "memory_domains": [ 00:08:39.705 { 00:08:39.705 "dma_device_id": "system", 00:08:39.705 "dma_device_type": 1 00:08:39.705 }, 00:08:39.705 { 00:08:39.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.705 "dma_device_type": 2 00:08:39.705 }, 00:08:39.705 { 00:08:39.705 "dma_device_id": "system", 00:08:39.705 "dma_device_type": 1 00:08:39.705 }, 00:08:39.705 { 00:08:39.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.705 "dma_device_type": 2 00:08:39.705 }, 00:08:39.705 { 00:08:39.705 "dma_device_id": "system", 00:08:39.705 "dma_device_type": 1 00:08:39.705 }, 00:08:39.705 { 00:08:39.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.705 "dma_device_type": 2 00:08:39.705 } 00:08:39.705 ], 00:08:39.705 "driver_specific": { 00:08:39.705 "raid": { 00:08:39.705 "uuid": "58a91c40-c3f5-43f3-86b7-9a4b66e2ce8f", 00:08:39.705 "strip_size_kb": 64, 00:08:39.705 "state": "online", 00:08:39.705 
"raid_level": "raid0", 00:08:39.705 "superblock": true, 00:08:39.705 "num_base_bdevs": 3, 00:08:39.705 "num_base_bdevs_discovered": 3, 00:08:39.705 "num_base_bdevs_operational": 3, 00:08:39.705 "base_bdevs_list": [ 00:08:39.705 { 00:08:39.705 "name": "NewBaseBdev", 00:08:39.705 "uuid": "7619af78-a170-4492-bd8a-0ee68258739c", 00:08:39.705 "is_configured": true, 00:08:39.705 "data_offset": 2048, 00:08:39.705 "data_size": 63488 00:08:39.705 }, 00:08:39.705 { 00:08:39.705 "name": "BaseBdev2", 00:08:39.705 "uuid": "7f95ce35-ed08-484c-9e96-784684df335f", 00:08:39.705 "is_configured": true, 00:08:39.705 "data_offset": 2048, 00:08:39.705 "data_size": 63488 00:08:39.705 }, 00:08:39.705 { 00:08:39.705 "name": "BaseBdev3", 00:08:39.705 "uuid": "b6386a4e-cc78-4610-9090-cacad3d200e1", 00:08:39.705 "is_configured": true, 00:08:39.705 "data_offset": 2048, 00:08:39.705 "data_size": 63488 00:08:39.705 } 00:08:39.706 ] 00:08:39.706 } 00:08:39.706 } 00:08:39.706 }' 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:39.706 BaseBdev2 00:08:39.706 BaseBdev3' 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 
00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.706 [2024-09-29 21:39:58.669704] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.706 [2024-09-29 21:39:58.669771] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.706 [2024-09-29 21:39:58.669882] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.706 [2024-09-29 21:39:58.669979] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.706 [2024-09-29 21:39:58.670017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64497 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64497 ']' 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 64497 00:08:39.706 21:39:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:39.706 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64497 00:08:39.966 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:39.966 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:39.966 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64497' 00:08:39.966 killing process with pid 64497 00:08:39.966 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64497 00:08:39.966 [2024-09-29 21:39:58.710764] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.966 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64497 00:08:40.226 [2024-09-29 21:39:59.026927] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.606 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:41.606 00:08:41.606 real 0m10.824s 00:08:41.606 user 0m16.873s 00:08:41.606 sys 0m2.005s 00:08:41.606 ************************************ 00:08:41.606 END TEST raid_state_function_test_sb 00:08:41.606 ************************************ 00:08:41.606 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.606 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.606 21:40:00 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:41.606 21:40:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:41.606 21:40:00 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.606 21:40:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.606 ************************************ 00:08:41.606 START TEST raid_superblock_test 00:08:41.606 ************************************ 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:41.606 21:40:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65123 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65123 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 65123 ']' 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.606 21:40:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.606 [2024-09-29 21:40:00.532752] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
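The repeated `[[ 512 == \5\1\2\ \ \ ]]` checks in this log are bash xtrace output: inside `[[ ]]` the right-hand side of `==` is a glob pattern, so xtrace prints it with every character escaped to show it matches literally. A minimal bash sketch of why that matters (the three trailing spaces are an assumption read off the escaped pattern; requires bash, not plain sh):

```shell
# The value under test: block_size followed by three empty metadata fields,
# space-joined -- i.e. "512" plus three trailing spaces.
cmp_base_bdev='512   '

# Quoting (or escaping, as xtrace shows) the RHS disables glob matching, so
# the comparison is an exact literal match, trailing spaces included.
if [[ $cmp_base_bdev == '512   ' ]]; then
    result=match
else
    result=mismatch
fi
echo "$result"
```

An unquoted, unescaped RHS would instead be treated as a pattern, which is why the test scripts let bash escape each character.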
00:08:41.606 [2024-09-29 21:40:00.532971] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65123 ] 00:08:41.899 [2024-09-29 21:40:00.701645] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.183 [2024-09-29 21:40:00.947067] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.442 [2024-09-29 21:40:01.178278] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.442 [2024-09-29 21:40:01.178367] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:42.442 
21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.442 malloc1 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.442 [2024-09-29 21:40:01.412717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.442 [2024-09-29 21:40:01.412860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.442 [2024-09-29 21:40:01.412905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:42.442 [2024-09-29 21:40:01.412949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.442 [2024-09-29 21:40:01.415358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.442 [2024-09-29 21:40:01.415429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.442 pt1 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.442 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.702 malloc2 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.702 [2024-09-29 21:40:01.483162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:42.702 [2024-09-29 21:40:01.483221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.702 [2024-09-29 21:40:01.483246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:42.702 [2024-09-29 21:40:01.483255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.702 [2024-09-29 21:40:01.485711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.702 pt2 00:08:42.702 [2024-09-29 21:40:01.485834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.702 malloc3 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.702 [2024-09-29 21:40:01.542858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:42.702 [2024-09-29 21:40:01.542962] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.702 [2024-09-29 21:40:01.543000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:42.702 [2024-09-29 21:40:01.543027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.702 [2024-09-29 21:40:01.545466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.702 [2024-09-29 21:40:01.545553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:42.702 pt3 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.702 [2024-09-29 21:40:01.554915] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:42.702 [2024-09-29 21:40:01.557035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.702 [2024-09-29 21:40:01.557157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:42.702 [2024-09-29 21:40:01.557351] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:42.702 [2024-09-29 21:40:01.557400] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:42.702 [2024-09-29 21:40:01.557651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
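The sequence above (three `bdev_malloc_create` / `bdev_passthru_create` pairs followed by `bdev_raid_create`) is driven through SPDK's JSON-RPC interface. A hedged sketch of the same calls issued by hand — this requires a running SPDK target, and the `scripts/rpc.py` path is only the usual in-tree location, which may differ:

```shell
# Sketch, not runnable standalone: assumes an SPDK app is listening on the
# default /var/tmp/spdk.sock.
RPC=./scripts/rpc.py

for i in 1 2 3; do
    # 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev
    # with a fixed UUID, mirroring the log above.
    $RPC bdev_malloc_create 32 512 -b malloc$i
    $RPC bdev_passthru_create -b malloc$i -p pt$i \
        -u 00000000-0000-0000-0000-00000000000$i
done

# Assemble a raid0 volume with a 64 KiB strip size (-z 64) over the three
# passthru bdevs, writing an on-disk superblock (-s).
$RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

# Inspect the result, as the test does before verifying and deleting it.
$RPC bdev_raid_get_bdevs all
```

All RPC names and arguments here appear verbatim in the xtrace output above; only the invocation wrapper is assumed.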
00:08:42.702 [2024-09-29 21:40:01.557864] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:42.702 [2024-09-29 21:40:01.557913] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:42.702 [2024-09-29 21:40:01.558111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.702 21:40:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.702 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.702 "name": "raid_bdev1", 00:08:42.702 "uuid": "929411dc-5947-412e-b35d-598640b0e6df", 00:08:42.702 "strip_size_kb": 64, 00:08:42.702 "state": "online", 00:08:42.702 "raid_level": "raid0", 00:08:42.702 "superblock": true, 00:08:42.702 "num_base_bdevs": 3, 00:08:42.702 "num_base_bdevs_discovered": 3, 00:08:42.702 "num_base_bdevs_operational": 3, 00:08:42.702 "base_bdevs_list": [ 00:08:42.702 { 00:08:42.702 "name": "pt1", 00:08:42.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.702 "is_configured": true, 00:08:42.702 "data_offset": 2048, 00:08:42.702 "data_size": 63488 00:08:42.702 }, 00:08:42.702 { 00:08:42.702 "name": "pt2", 00:08:42.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.702 "is_configured": true, 00:08:42.702 "data_offset": 2048, 00:08:42.702 "data_size": 63488 00:08:42.703 }, 00:08:42.703 { 00:08:42.703 "name": "pt3", 00:08:42.703 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.703 "is_configured": true, 00:08:42.703 "data_offset": 2048, 00:08:42.703 "data_size": 63488 00:08:42.703 } 00:08:42.703 ] 00:08:42.703 }' 00:08:42.703 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.703 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.272 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:43.272 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:43.272 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:43.272 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:43.272 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:43.272 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:43.272 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:43.272 21:40:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.272 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.272 21:40:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.272 [2024-09-29 21:40:01.998410] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:43.272 "name": "raid_bdev1", 00:08:43.272 "aliases": [ 00:08:43.272 "929411dc-5947-412e-b35d-598640b0e6df" 00:08:43.272 ], 00:08:43.272 "product_name": "Raid Volume", 00:08:43.272 "block_size": 512, 00:08:43.272 "num_blocks": 190464, 00:08:43.272 "uuid": "929411dc-5947-412e-b35d-598640b0e6df", 00:08:43.272 "assigned_rate_limits": { 00:08:43.272 "rw_ios_per_sec": 0, 00:08:43.272 "rw_mbytes_per_sec": 0, 00:08:43.272 "r_mbytes_per_sec": 0, 00:08:43.272 "w_mbytes_per_sec": 0 00:08:43.272 }, 00:08:43.272 "claimed": false, 00:08:43.272 "zoned": false, 00:08:43.272 "supported_io_types": { 00:08:43.272 "read": true, 00:08:43.272 "write": true, 00:08:43.272 "unmap": true, 00:08:43.272 "flush": true, 00:08:43.272 "reset": true, 00:08:43.272 "nvme_admin": false, 00:08:43.272 "nvme_io": false, 00:08:43.272 "nvme_io_md": false, 00:08:43.272 "write_zeroes": true, 00:08:43.272 "zcopy": false, 00:08:43.272 "get_zone_info": false, 00:08:43.272 "zone_management": false, 00:08:43.272 "zone_append": false, 00:08:43.272 "compare": 
false, 00:08:43.272 "compare_and_write": false, 00:08:43.272 "abort": false, 00:08:43.272 "seek_hole": false, 00:08:43.272 "seek_data": false, 00:08:43.272 "copy": false, 00:08:43.272 "nvme_iov_md": false 00:08:43.272 }, 00:08:43.272 "memory_domains": [ 00:08:43.272 { 00:08:43.272 "dma_device_id": "system", 00:08:43.272 "dma_device_type": 1 00:08:43.272 }, 00:08:43.272 { 00:08:43.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.272 "dma_device_type": 2 00:08:43.272 }, 00:08:43.272 { 00:08:43.272 "dma_device_id": "system", 00:08:43.272 "dma_device_type": 1 00:08:43.272 }, 00:08:43.272 { 00:08:43.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.272 "dma_device_type": 2 00:08:43.272 }, 00:08:43.272 { 00:08:43.272 "dma_device_id": "system", 00:08:43.272 "dma_device_type": 1 00:08:43.272 }, 00:08:43.272 { 00:08:43.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.272 "dma_device_type": 2 00:08:43.272 } 00:08:43.272 ], 00:08:43.272 "driver_specific": { 00:08:43.272 "raid": { 00:08:43.272 "uuid": "929411dc-5947-412e-b35d-598640b0e6df", 00:08:43.272 "strip_size_kb": 64, 00:08:43.272 "state": "online", 00:08:43.272 "raid_level": "raid0", 00:08:43.272 "superblock": true, 00:08:43.272 "num_base_bdevs": 3, 00:08:43.272 "num_base_bdevs_discovered": 3, 00:08:43.272 "num_base_bdevs_operational": 3, 00:08:43.272 "base_bdevs_list": [ 00:08:43.272 { 00:08:43.272 "name": "pt1", 00:08:43.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.272 "is_configured": true, 00:08:43.272 "data_offset": 2048, 00:08:43.272 "data_size": 63488 00:08:43.272 }, 00:08:43.272 { 00:08:43.272 "name": "pt2", 00:08:43.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.272 "is_configured": true, 00:08:43.272 "data_offset": 2048, 00:08:43.272 "data_size": 63488 00:08:43.272 }, 00:08:43.272 { 00:08:43.272 "name": "pt3", 00:08:43.272 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.272 "is_configured": true, 00:08:43.272 "data_offset": 2048, 00:08:43.272 "data_size": 
63488 00:08:43.272 } 00:08:43.272 ] 00:08:43.272 } 00:08:43.272 } 00:08:43.272 }' 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:43.272 pt2 00:08:43.272 pt3' 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.272 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.533 [2024-09-29 21:40:02.273869] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=929411dc-5947-412e-b35d-598640b0e6df 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 929411dc-5947-412e-b35d-598640b0e6df ']' 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.533 [2024-09-29 21:40:02.321528] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.533 [2024-09-29 21:40:02.321601] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.533 [2024-09-29 21:40:02.321691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.533 [2024-09-29 21:40:02.321788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.533 [2024-09-29 21:40:02.321830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.533 [2024-09-29 21:40:02.469305] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:43.533 [2024-09-29 21:40:02.471438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:43.533 [2024-09-29 21:40:02.471532] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:43.533 [2024-09-29 21:40:02.471601] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:43.533 [2024-09-29 21:40:02.471697] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:43.533 [2024-09-29 21:40:02.471753] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:43.533 [2024-09-29 21:40:02.471804] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.533 [2024-09-29 21:40:02.471837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:43.533 request: 00:08:43.533 { 00:08:43.533 "name": "raid_bdev1", 00:08:43.533 "raid_level": "raid0", 00:08:43.533 "base_bdevs": [ 00:08:43.533 "malloc1", 00:08:43.533 "malloc2", 00:08:43.533 "malloc3" 00:08:43.533 ], 00:08:43.533 "strip_size_kb": 64, 00:08:43.533 "superblock": false, 00:08:43.533 "method": "bdev_raid_create", 00:08:43.533 "req_id": 1 00:08:43.533 } 00:08:43.533 Got JSON-RPC error response 00:08:43.533 response: 00:08:43.533 { 00:08:43.533 "code": -17, 00:08:43.533 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:43.533 } 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.533 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.793 [2024-09-29 21:40:02.537166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:43.793 [2024-09-29 21:40:02.537279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.793 [2024-09-29 21:40:02.537315] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:43.793 [2024-09-29 21:40:02.537344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.793 [2024-09-29 21:40:02.539780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.793 [2024-09-29 21:40:02.539863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:43.793 [2024-09-29 21:40:02.539953] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:43.793 [2024-09-29 21:40:02.540020] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:43.793 pt1 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.793 "name": "raid_bdev1", 00:08:43.793 "uuid": "929411dc-5947-412e-b35d-598640b0e6df", 00:08:43.793 
"strip_size_kb": 64, 00:08:43.793 "state": "configuring", 00:08:43.793 "raid_level": "raid0", 00:08:43.793 "superblock": true, 00:08:43.793 "num_base_bdevs": 3, 00:08:43.793 "num_base_bdevs_discovered": 1, 00:08:43.793 "num_base_bdevs_operational": 3, 00:08:43.793 "base_bdevs_list": [ 00:08:43.793 { 00:08:43.793 "name": "pt1", 00:08:43.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:43.793 "is_configured": true, 00:08:43.793 "data_offset": 2048, 00:08:43.793 "data_size": 63488 00:08:43.793 }, 00:08:43.793 { 00:08:43.793 "name": null, 00:08:43.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.793 "is_configured": false, 00:08:43.793 "data_offset": 2048, 00:08:43.793 "data_size": 63488 00:08:43.793 }, 00:08:43.793 { 00:08:43.793 "name": null, 00:08:43.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.793 "is_configured": false, 00:08:43.793 "data_offset": 2048, 00:08:43.793 "data_size": 63488 00:08:43.793 } 00:08:43.793 ] 00:08:43.793 }' 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.793 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.053 [2024-09-29 21:40:02.948448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.053 [2024-09-29 21:40:02.948551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.053 [2024-09-29 21:40:02.948592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:44.053 [2024-09-29 21:40:02.948621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.053 [2024-09-29 21:40:02.949095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.053 [2024-09-29 21:40:02.949152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:44.053 [2024-09-29 21:40:02.949254] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:44.053 [2024-09-29 21:40:02.949303] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.053 pt2 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.053 [2024-09-29 21:40:02.960461] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.053 21:40:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.053 21:40:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.053 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.053 "name": "raid_bdev1", 00:08:44.053 "uuid": "929411dc-5947-412e-b35d-598640b0e6df", 00:08:44.053 "strip_size_kb": 64, 00:08:44.053 "state": "configuring", 00:08:44.053 "raid_level": "raid0", 00:08:44.053 "superblock": true, 00:08:44.053 "num_base_bdevs": 3, 00:08:44.053 "num_base_bdevs_discovered": 1, 00:08:44.053 "num_base_bdevs_operational": 3, 00:08:44.053 "base_bdevs_list": [ 00:08:44.053 { 00:08:44.053 "name": "pt1", 00:08:44.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.053 "is_configured": true, 00:08:44.053 "data_offset": 2048, 00:08:44.053 "data_size": 63488 00:08:44.053 }, 00:08:44.053 { 00:08:44.053 "name": null, 00:08:44.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.053 "is_configured": false, 00:08:44.053 "data_offset": 0, 00:08:44.053 "data_size": 63488 00:08:44.053 }, 00:08:44.053 { 00:08:44.053 "name": null, 00:08:44.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.053 
"is_configured": false, 00:08:44.053 "data_offset": 2048, 00:08:44.053 "data_size": 63488 00:08:44.053 } 00:08:44.053 ] 00:08:44.053 }' 00:08:44.053 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.053 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.622 [2024-09-29 21:40:03.415682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:44.622 [2024-09-29 21:40:03.415784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.622 [2024-09-29 21:40:03.415818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:44.622 [2024-09-29 21:40:03.415847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.622 [2024-09-29 21:40:03.416332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.622 [2024-09-29 21:40:03.416396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:44.622 [2024-09-29 21:40:03.416507] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:44.622 [2024-09-29 21:40:03.416578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.622 pt2 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.622 [2024-09-29 21:40:03.427681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:44.622 [2024-09-29 21:40:03.427763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.622 [2024-09-29 21:40:03.427790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:44.622 [2024-09-29 21:40:03.427818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.622 [2024-09-29 21:40:03.428233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.622 [2024-09-29 21:40:03.428300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:44.622 [2024-09-29 21:40:03.428384] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:44.622 [2024-09-29 21:40:03.428432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:44.622 [2024-09-29 21:40:03.428571] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:44.622 [2024-09-29 21:40:03.428611] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:44.622 [2024-09-29 21:40:03.428897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:44.622 [2024-09-29 21:40:03.429091] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:44.622 [2024-09-29 21:40:03.429131] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:44.622 [2024-09-29 21:40:03.429300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.622 pt3 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.622 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.622 "name": "raid_bdev1", 00:08:44.622 "uuid": "929411dc-5947-412e-b35d-598640b0e6df", 00:08:44.622 "strip_size_kb": 64, 00:08:44.622 "state": "online", 00:08:44.623 "raid_level": "raid0", 00:08:44.623 "superblock": true, 00:08:44.623 "num_base_bdevs": 3, 00:08:44.623 "num_base_bdevs_discovered": 3, 00:08:44.623 "num_base_bdevs_operational": 3, 00:08:44.623 "base_bdevs_list": [ 00:08:44.623 { 00:08:44.623 "name": "pt1", 00:08:44.623 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:44.623 "is_configured": true, 00:08:44.623 "data_offset": 2048, 00:08:44.623 "data_size": 63488 00:08:44.623 }, 00:08:44.623 { 00:08:44.623 "name": "pt2", 00:08:44.623 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.623 "is_configured": true, 00:08:44.623 "data_offset": 2048, 00:08:44.623 "data_size": 63488 00:08:44.623 }, 00:08:44.623 { 00:08:44.623 "name": "pt3", 00:08:44.623 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.623 "is_configured": true, 00:08:44.623 "data_offset": 2048, 00:08:44.623 "data_size": 63488 00:08:44.623 } 00:08:44.623 ] 00:08:44.623 }' 00:08:44.623 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.623 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.191 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:45.191 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:45.191 21:40:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:45.191 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.191 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.191 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.191 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:45.191 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.191 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.191 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.191 [2024-09-29 21:40:03.887207] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.191 21:40:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.191 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:45.191 "name": "raid_bdev1", 00:08:45.191 "aliases": [ 00:08:45.191 "929411dc-5947-412e-b35d-598640b0e6df" 00:08:45.191 ], 00:08:45.191 "product_name": "Raid Volume", 00:08:45.191 "block_size": 512, 00:08:45.191 "num_blocks": 190464, 00:08:45.191 "uuid": "929411dc-5947-412e-b35d-598640b0e6df", 00:08:45.191 "assigned_rate_limits": { 00:08:45.191 "rw_ios_per_sec": 0, 00:08:45.191 "rw_mbytes_per_sec": 0, 00:08:45.191 "r_mbytes_per_sec": 0, 00:08:45.191 "w_mbytes_per_sec": 0 00:08:45.191 }, 00:08:45.191 "claimed": false, 00:08:45.191 "zoned": false, 00:08:45.191 "supported_io_types": { 00:08:45.191 "read": true, 00:08:45.191 "write": true, 00:08:45.191 "unmap": true, 00:08:45.192 "flush": true, 00:08:45.192 "reset": true, 00:08:45.192 "nvme_admin": false, 00:08:45.192 "nvme_io": false, 00:08:45.192 "nvme_io_md": false, 00:08:45.192 
"write_zeroes": true, 00:08:45.192 "zcopy": false, 00:08:45.192 "get_zone_info": false, 00:08:45.192 "zone_management": false, 00:08:45.192 "zone_append": false, 00:08:45.192 "compare": false, 00:08:45.192 "compare_and_write": false, 00:08:45.192 "abort": false, 00:08:45.192 "seek_hole": false, 00:08:45.192 "seek_data": false, 00:08:45.192 "copy": false, 00:08:45.192 "nvme_iov_md": false 00:08:45.192 }, 00:08:45.192 "memory_domains": [ 00:08:45.192 { 00:08:45.192 "dma_device_id": "system", 00:08:45.192 "dma_device_type": 1 00:08:45.192 }, 00:08:45.192 { 00:08:45.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.192 "dma_device_type": 2 00:08:45.192 }, 00:08:45.192 { 00:08:45.192 "dma_device_id": "system", 00:08:45.192 "dma_device_type": 1 00:08:45.192 }, 00:08:45.192 { 00:08:45.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.192 "dma_device_type": 2 00:08:45.192 }, 00:08:45.192 { 00:08:45.192 "dma_device_id": "system", 00:08:45.192 "dma_device_type": 1 00:08:45.192 }, 00:08:45.192 { 00:08:45.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.192 "dma_device_type": 2 00:08:45.192 } 00:08:45.192 ], 00:08:45.192 "driver_specific": { 00:08:45.192 "raid": { 00:08:45.192 "uuid": "929411dc-5947-412e-b35d-598640b0e6df", 00:08:45.192 "strip_size_kb": 64, 00:08:45.192 "state": "online", 00:08:45.192 "raid_level": "raid0", 00:08:45.192 "superblock": true, 00:08:45.192 "num_base_bdevs": 3, 00:08:45.192 "num_base_bdevs_discovered": 3, 00:08:45.192 "num_base_bdevs_operational": 3, 00:08:45.192 "base_bdevs_list": [ 00:08:45.192 { 00:08:45.192 "name": "pt1", 00:08:45.192 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:45.192 "is_configured": true, 00:08:45.192 "data_offset": 2048, 00:08:45.192 "data_size": 63488 00:08:45.192 }, 00:08:45.192 { 00:08:45.192 "name": "pt2", 00:08:45.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.192 "is_configured": true, 00:08:45.192 "data_offset": 2048, 00:08:45.192 "data_size": 63488 00:08:45.192 }, 00:08:45.192 
{ 00:08:45.192 "name": "pt3", 00:08:45.192 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:45.192 "is_configured": true, 00:08:45.192 "data_offset": 2048, 00:08:45.192 "data_size": 63488 00:08:45.192 } 00:08:45.192 ] 00:08:45.192 } 00:08:45.192 } 00:08:45.192 }' 00:08:45.192 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.192 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:45.192 pt2 00:08:45.192 pt3' 00:08:45.192 21:40:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.192 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.192 [2024-09-29 
21:40:04.174645] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 929411dc-5947-412e-b35d-598640b0e6df '!=' 929411dc-5947-412e-b35d-598640b0e6df ']' 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65123 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 65123 ']' 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 65123 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65123 00:08:45.452 killing process with pid 65123 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65123' 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 65123 00:08:45.452 [2024-09-29 21:40:04.256874] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.452 [2024-09-29 21:40:04.256982] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.452 [2024-09-29 21:40:04.257058] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.452 21:40:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 65123 00:08:45.452 [2024-09-29 21:40:04.257074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:45.712 [2024-09-29 21:40:04.573506] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:47.092 21:40:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:47.092 00:08:47.092 real 0m5.467s 00:08:47.092 user 0m7.594s 00:08:47.092 sys 0m1.067s 00:08:47.092 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.092 21:40:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.092 ************************************ 00:08:47.092 END TEST raid_superblock_test 00:08:47.092 ************************************ 00:08:47.092 21:40:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:47.092 21:40:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:47.092 21:40:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.092 21:40:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:47.092 ************************************ 00:08:47.092 START TEST raid_read_error_test 00:08:47.092 ************************************ 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:47.092 21:40:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.M6rzUalDLR 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65377 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65377 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65377 ']' 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.092 21:40:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.352 [2024-09-29 21:40:06.089272] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:47.352 [2024-09-29 21:40:06.089408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65377 ] 00:08:47.352 [2024-09-29 21:40:06.252468] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.611 [2024-09-29 21:40:06.494828] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.870 [2024-09-29 21:40:06.715712] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.870 [2024-09-29 21:40:06.715754] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.130 BaseBdev1_malloc 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.130 true 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.130 [2024-09-29 21:40:06.956443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:48.130 [2024-09-29 21:40:06.956508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.130 [2024-09-29 21:40:06.956526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:48.130 [2024-09-29 21:40:06.956538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.130 [2024-09-29 21:40:06.958887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.130 [2024-09-29 21:40:06.958925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:48.130 BaseBdev1 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.130 21:40:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.130 BaseBdev2_malloc 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.130 true 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.130 [2024-09-29 21:40:07.058797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:48.130 [2024-09-29 21:40:07.058921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.130 [2024-09-29 21:40:07.058941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:48.130 [2024-09-29 21:40:07.058953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.130 [2024-09-29 21:40:07.061321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.130 [2024-09-29 21:40:07.061362] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:48.130 BaseBdev2 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.130 BaseBdev3_malloc 00:08:48.130 21:40:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.130 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.389 true 00:08:48.389 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.390 [2024-09-29 21:40:07.131147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:48.390 [2024-09-29 21:40:07.131256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.390 [2024-09-29 21:40:07.131300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:48.390 [2024-09-29 21:40:07.131335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.390 [2024-09-29 21:40:07.133736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.390 [2024-09-29 21:40:07.133774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:48.390 BaseBdev3 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.390 [2024-09-29 21:40:07.143203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.390 [2024-09-29 21:40:07.145308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.390 [2024-09-29 21:40:07.145438] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.390 [2024-09-29 21:40:07.145702] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:48.390 [2024-09-29 21:40:07.145749] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:48.390 [2024-09-29 21:40:07.146019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:48.390 [2024-09-29 21:40:07.146227] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:48.390 [2024-09-29 21:40:07.146269] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:48.390 [2024-09-29 21:40:07.146438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.390 21:40:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.390 "name": "raid_bdev1", 00:08:48.390 "uuid": "49c73752-206e-4679-886c-b1f75a82846e", 00:08:48.390 "strip_size_kb": 64, 00:08:48.390 "state": "online", 00:08:48.390 "raid_level": "raid0", 00:08:48.390 "superblock": true, 00:08:48.390 "num_base_bdevs": 3, 00:08:48.390 "num_base_bdevs_discovered": 3, 00:08:48.390 "num_base_bdevs_operational": 3, 00:08:48.390 "base_bdevs_list": [ 00:08:48.390 { 00:08:48.390 "name": "BaseBdev1", 00:08:48.390 "uuid": "a80b2b2d-0462-5dbc-a00b-38ba8e200c8d", 00:08:48.390 "is_configured": true, 00:08:48.390 "data_offset": 2048, 00:08:48.390 "data_size": 63488 00:08:48.390 }, 00:08:48.390 { 00:08:48.390 "name": "BaseBdev2", 00:08:48.390 "uuid": "106ae9d2-f7a6-59fc-97fe-59e9606d94e1", 00:08:48.390 "is_configured": true, 00:08:48.390 "data_offset": 2048, 00:08:48.390 "data_size": 63488 
00:08:48.390 }, 00:08:48.390 { 00:08:48.390 "name": "BaseBdev3", 00:08:48.390 "uuid": "87673e64-2dd0-523e-8ec9-5fc67a62d523", 00:08:48.390 "is_configured": true, 00:08:48.390 "data_offset": 2048, 00:08:48.390 "data_size": 63488 00:08:48.390 } 00:08:48.390 ] 00:08:48.390 }' 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.390 21:40:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.649 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:48.649 21:40:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:48.908 [2024-09-29 21:40:07.679612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.843 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.843 "name": "raid_bdev1", 00:08:49.843 "uuid": "49c73752-206e-4679-886c-b1f75a82846e", 00:08:49.843 "strip_size_kb": 64, 00:08:49.843 "state": "online", 00:08:49.843 "raid_level": "raid0", 00:08:49.843 "superblock": true, 00:08:49.843 "num_base_bdevs": 3, 00:08:49.843 "num_base_bdevs_discovered": 3, 00:08:49.843 "num_base_bdevs_operational": 3, 00:08:49.843 "base_bdevs_list": [ 00:08:49.843 { 00:08:49.843 "name": "BaseBdev1", 00:08:49.843 "uuid": "a80b2b2d-0462-5dbc-a00b-38ba8e200c8d", 00:08:49.843 "is_configured": true, 00:08:49.843 "data_offset": 2048, 00:08:49.843 "data_size": 63488 
00:08:49.843 }, 00:08:49.843 { 00:08:49.843 "name": "BaseBdev2", 00:08:49.843 "uuid": "106ae9d2-f7a6-59fc-97fe-59e9606d94e1", 00:08:49.843 "is_configured": true, 00:08:49.844 "data_offset": 2048, 00:08:49.844 "data_size": 63488 00:08:49.844 }, 00:08:49.844 { 00:08:49.844 "name": "BaseBdev3", 00:08:49.844 "uuid": "87673e64-2dd0-523e-8ec9-5fc67a62d523", 00:08:49.844 "is_configured": true, 00:08:49.844 "data_offset": 2048, 00:08:49.844 "data_size": 63488 00:08:49.844 } 00:08:49.844 ] 00:08:49.844 }' 00:08:49.844 21:40:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.844 21:40:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.103 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:50.103 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.103 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.103 [2024-09-29 21:40:09.056226] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.103 [2024-09-29 21:40:09.056364] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.103 [2024-09-29 21:40:09.058978] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.103 [2024-09-29 21:40:09.059026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.103 [2024-09-29 21:40:09.059074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.103 [2024-09-29 21:40:09.059084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:50.103 { 00:08:50.103 "results": [ 00:08:50.103 { 00:08:50.103 "job": "raid_bdev1", 00:08:50.103 "core_mask": "0x1", 00:08:50.103 "workload": "randrw", 00:08:50.103 "percentage": 50, 
00:08:50.103 "status": "finished", 00:08:50.103 "queue_depth": 1, 00:08:50.103 "io_size": 131072, 00:08:50.103 "runtime": 1.377145, 00:08:50.103 "iops": 14508.276180068184, 00:08:50.103 "mibps": 1813.534522508523, 00:08:50.103 "io_failed": 1, 00:08:50.103 "io_timeout": 0, 00:08:50.103 "avg_latency_us": 97.25302618273386, 00:08:50.103 "min_latency_us": 21.575545851528386, 00:08:50.103 "max_latency_us": 1359.3711790393013 00:08:50.103 } 00:08:50.103 ], 00:08:50.103 "core_count": 1 00:08:50.103 } 00:08:50.103 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.103 21:40:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65377 00:08:50.103 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65377 ']' 00:08:50.103 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65377 00:08:50.103 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:50.103 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.103 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65377 00:08:50.362 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.362 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.362 killing process with pid 65377 00:08:50.362 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65377' 00:08:50.362 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65377 00:08:50.362 [2024-09-29 21:40:09.096564] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.362 21:40:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65377 00:08:50.362 [2024-09-29 
21:40:09.342579] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.273 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.M6rzUalDLR 00:08:52.273 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:52.273 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:52.273 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:52.273 ************************************ 00:08:52.273 END TEST raid_read_error_test 00:08:52.273 ************************************ 00:08:52.273 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:52.273 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.273 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:52.273 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:52.273 00:08:52.273 real 0m4.768s 00:08:52.273 user 0m5.464s 00:08:52.273 sys 0m0.691s 00:08:52.273 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.273 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.273 21:40:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:52.273 21:40:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:52.273 21:40:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.273 21:40:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.273 ************************************ 00:08:52.273 START TEST raid_write_error_test 00:08:52.273 ************************************ 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:52.273 21:40:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:52.273 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:52.274 21:40:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RPZrA38JC9 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65527 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65527 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65527 ']' 00:08:52.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.274 21:40:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.274 [2024-09-29 21:40:10.933430] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:52.274 [2024-09-29 21:40:10.933563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65527 ] 00:08:52.274 [2024-09-29 21:40:11.096230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.534 [2024-09-29 21:40:11.339419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.794 [2024-09-29 21:40:11.568882] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.794 [2024-09-29 21:40:11.568994] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.794 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.794 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:52.794 21:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.794 21:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:52.794 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.794 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.055 BaseBdev1_malloc 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.055 true 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.055 [2024-09-29 21:40:11.826863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:53.055 [2024-09-29 21:40:11.826999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.055 [2024-09-29 21:40:11.827043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:53.055 [2024-09-29 21:40:11.827075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.055 [2024-09-29 21:40:11.829429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.055 [2024-09-29 21:40:11.829525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:53.055 BaseBdev1 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.055 BaseBdev2_malloc 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.055 true 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.055 [2024-09-29 21:40:11.927053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:53.055 [2024-09-29 21:40:11.927166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.055 [2024-09-29 21:40:11.927200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:53.055 [2024-09-29 21:40:11.927229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.055 [2024-09-29 21:40:11.929600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.055 [2024-09-29 21:40:11.929693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:53.055 BaseBdev2 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:53.055 21:40:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.055 BaseBdev3_malloc 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.055 true 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.055 21:40:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.055 [2024-09-29 21:40:12.000250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:53.055 [2024-09-29 21:40:12.000381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.055 [2024-09-29 21:40:12.000416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:53.055 [2024-09-29 21:40:12.000445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.055 [2024-09-29 21:40:12.002765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.055 [2024-09-29 21:40:12.002837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:53.055 BaseBdev3 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.055 [2024-09-29 21:40:12.012321] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.055 [2024-09-29 21:40:12.014331] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.055 [2024-09-29 21:40:12.014401] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:53.055 [2024-09-29 21:40:12.014584] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:53.055 [2024-09-29 21:40:12.014595] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:53.055 [2024-09-29 21:40:12.014821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:53.055 [2024-09-29 21:40:12.014967] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:53.055 [2024-09-29 21:40:12.014979] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:53.055 [2024-09-29 21:40:12.015154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.055 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.315 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.315 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.315 "name": "raid_bdev1", 00:08:53.315 "uuid": "0972198d-afb2-41f4-a63f-787759a8681b", 00:08:53.315 "strip_size_kb": 64, 00:08:53.315 "state": "online", 00:08:53.315 "raid_level": "raid0", 00:08:53.315 "superblock": true, 00:08:53.315 "num_base_bdevs": 3, 00:08:53.315 "num_base_bdevs_discovered": 3, 00:08:53.315 "num_base_bdevs_operational": 3, 00:08:53.315 "base_bdevs_list": [ 00:08:53.315 { 00:08:53.315 "name": "BaseBdev1", 
00:08:53.315 "uuid": "6a2ca5e1-fc17-5cd9-ab5d-87bc3127396d", 00:08:53.315 "is_configured": true, 00:08:53.315 "data_offset": 2048, 00:08:53.315 "data_size": 63488 00:08:53.315 }, 00:08:53.315 { 00:08:53.315 "name": "BaseBdev2", 00:08:53.315 "uuid": "cf1a2b8c-6f67-5640-8d60-3bb1a2fb2387", 00:08:53.315 "is_configured": true, 00:08:53.315 "data_offset": 2048, 00:08:53.315 "data_size": 63488 00:08:53.315 }, 00:08:53.315 { 00:08:53.315 "name": "BaseBdev3", 00:08:53.315 "uuid": "4b2346bb-23f7-5dc4-a99a-529e0b6a3a31", 00:08:53.315 "is_configured": true, 00:08:53.315 "data_offset": 2048, 00:08:53.315 "data_size": 63488 00:08:53.315 } 00:08:53.315 ] 00:08:53.315 }' 00:08:53.315 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.315 21:40:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.574 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:53.574 21:40:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:53.835 [2024-09-29 21:40:12.580777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.773 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.774 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.774 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.774 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.774 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.774 "name": "raid_bdev1", 00:08:54.774 "uuid": "0972198d-afb2-41f4-a63f-787759a8681b", 00:08:54.774 "strip_size_kb": 64, 00:08:54.774 "state": "online", 00:08:54.774 
"raid_level": "raid0", 00:08:54.774 "superblock": true, 00:08:54.774 "num_base_bdevs": 3, 00:08:54.774 "num_base_bdevs_discovered": 3, 00:08:54.774 "num_base_bdevs_operational": 3, 00:08:54.774 "base_bdevs_list": [ 00:08:54.774 { 00:08:54.774 "name": "BaseBdev1", 00:08:54.774 "uuid": "6a2ca5e1-fc17-5cd9-ab5d-87bc3127396d", 00:08:54.774 "is_configured": true, 00:08:54.774 "data_offset": 2048, 00:08:54.774 "data_size": 63488 00:08:54.774 }, 00:08:54.774 { 00:08:54.774 "name": "BaseBdev2", 00:08:54.774 "uuid": "cf1a2b8c-6f67-5640-8d60-3bb1a2fb2387", 00:08:54.774 "is_configured": true, 00:08:54.774 "data_offset": 2048, 00:08:54.774 "data_size": 63488 00:08:54.774 }, 00:08:54.774 { 00:08:54.774 "name": "BaseBdev3", 00:08:54.774 "uuid": "4b2346bb-23f7-5dc4-a99a-529e0b6a3a31", 00:08:54.774 "is_configured": true, 00:08:54.774 "data_offset": 2048, 00:08:54.774 "data_size": 63488 00:08:54.774 } 00:08:54.774 ] 00:08:54.774 }' 00:08:54.774 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.774 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.033 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:55.033 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.033 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.033 [2024-09-29 21:40:13.973209] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.033 [2024-09-29 21:40:13.973326] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.033 [2024-09-29 21:40:13.975952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.033 [2024-09-29 21:40:13.976050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.033 [2024-09-29 21:40:13.976114] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.033 [2024-09-29 21:40:13.976173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:55.033 { 00:08:55.033 "results": [ 00:08:55.033 { 00:08:55.033 "job": "raid_bdev1", 00:08:55.033 "core_mask": "0x1", 00:08:55.033 "workload": "randrw", 00:08:55.033 "percentage": 50, 00:08:55.033 "status": "finished", 00:08:55.033 "queue_depth": 1, 00:08:55.033 "io_size": 131072, 00:08:55.033 "runtime": 1.393277, 00:08:55.033 "iops": 14600.11182270288, 00:08:55.033 "mibps": 1825.01397783786, 00:08:55.033 "io_failed": 1, 00:08:55.033 "io_timeout": 0, 00:08:55.033 "avg_latency_us": 96.46051646575638, 00:08:55.033 "min_latency_us": 24.258515283842794, 00:08:55.033 "max_latency_us": 1345.0620087336245 00:08:55.033 } 00:08:55.033 ], 00:08:55.033 "core_count": 1 00:08:55.033 } 00:08:55.033 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.033 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65527 00:08:55.033 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65527 ']' 00:08:55.033 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65527 00:08:55.033 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:55.033 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.033 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65527 00:08:55.293 killing process with pid 65527 00:08:55.293 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.293 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.293 21:40:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65527' 00:08:55.293 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65527 00:08:55.293 [2024-09-29 21:40:14.022674] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.293 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65527 00:08:55.293 [2024-09-29 21:40:14.265170] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.691 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RPZrA38JC9 00:08:56.691 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:56.691 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:56.691 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:56.691 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:56.691 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.691 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:56.691 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:56.691 ************************************ 00:08:56.691 END TEST raid_write_error_test 00:08:56.691 ************************************ 00:08:56.691 00:08:56.691 real 0m4.850s 00:08:56.691 user 0m5.596s 00:08:56.691 sys 0m0.712s 00:08:56.691 21:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.691 21:40:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.952 21:40:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:56.952 21:40:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:56.952 21:40:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:56.952 21:40:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.952 21:40:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.952 ************************************ 00:08:56.952 START TEST raid_state_function_test 00:08:56.952 ************************************ 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:56.952 21:40:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65671 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:56.952 Process raid pid: 65671 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65671' 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65671 00:08:56.952 21:40:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65671 ']' 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.952 21:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.952 [2024-09-29 21:40:15.843449] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:56.952 [2024-09-29 21:40:15.843630] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.212 [2024-09-29 21:40:16.011603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.472 [2024-09-29 21:40:16.248482] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.732 [2024-09-29 21:40:16.482466] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.732 [2024-09-29 21:40:16.482509] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.732 [2024-09-29 21:40:16.674426] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.732 [2024-09-29 21:40:16.674482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.732 [2024-09-29 21:40:16.674492] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.732 [2024-09-29 21:40:16.674502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.732 [2024-09-29 21:40:16.674507] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.732 [2024-09-29 21:40:16.674516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:57.732 21:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:57.990 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:57.991 "name": "Existed_Raid",
00:08:57.991 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:57.991 "strip_size_kb": 64,
00:08:57.991 "state": "configuring",
00:08:57.991 "raid_level": "concat",
00:08:57.991 "superblock": false,
00:08:57.991 "num_base_bdevs": 3,
00:08:57.991 "num_base_bdevs_discovered": 0,
00:08:57.991 "num_base_bdevs_operational": 3,
00:08:57.991 "base_bdevs_list": [
00:08:57.991 {
00:08:57.991 "name": "BaseBdev1",
00:08:57.991 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:57.991 "is_configured": false,
00:08:57.991 "data_offset": 0,
00:08:57.991 "data_size": 0
00:08:57.991 },
00:08:57.991 {
00:08:57.991 "name": "BaseBdev2",
00:08:57.991 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:57.991 "is_configured": false,
00:08:57.991 "data_offset": 0,
00:08:57.991 "data_size": 0
00:08:57.991 },
00:08:57.991 {
00:08:57.991 "name": "BaseBdev3",
00:08:57.991 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:57.991 "is_configured": false,
00:08:57.991 "data_offset": 0,
00:08:57.991 "data_size": 0
00:08:57.991 }
00:08:57.991 ]
00:08:57.991 }'
00:08:57.991 21:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:57.991 21:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.250 [2024-09-29 21:40:17.109599] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:58.250 [2024-09-29 21:40:17.109722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.250 [2024-09-29 21:40:17.121609] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:58.250 [2024-09-29 21:40:17.121717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:58.250 [2024-09-29 21:40:17.121743] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:58.250 [2024-09-29 21:40:17.121766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:58.250 [2024-09-29 21:40:17.121783] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:58.250 [2024-09-29 21:40:17.121803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.250 [2024-09-29 21:40:17.206219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:58.250 BaseBdev1
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.250 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.250 [
00:08:58.250 {
00:08:58.250 "name": "BaseBdev1",
00:08:58.250 "aliases": [
00:08:58.250 "896640b0-9b0f-401e-8231-de88761ede80"
00:08:58.250 ],
00:08:58.250 "product_name": "Malloc disk",
00:08:58.250 "block_size": 512,
00:08:58.250 "num_blocks": 65536,
00:08:58.250 "uuid": "896640b0-9b0f-401e-8231-de88761ede80",
00:08:58.250 "assigned_rate_limits": {
00:08:58.250 "rw_ios_per_sec": 0,
00:08:58.250 "rw_mbytes_per_sec": 0,
00:08:58.250 "r_mbytes_per_sec": 0,
00:08:58.250 "w_mbytes_per_sec": 0
00:08:58.510 },
00:08:58.510 "claimed": true,
00:08:58.510 "claim_type": "exclusive_write",
00:08:58.510 "zoned": false,
00:08:58.510 "supported_io_types": {
00:08:58.510 "read": true,
00:08:58.510 "write": true,
00:08:58.510 "unmap": true,
00:08:58.510 "flush": true,
00:08:58.510 "reset": true,
00:08:58.510 "nvme_admin": false,
00:08:58.510 "nvme_io": false,
00:08:58.510 "nvme_io_md": false,
00:08:58.510 "write_zeroes": true,
00:08:58.510 "zcopy": true,
00:08:58.510 "get_zone_info": false,
00:08:58.510 "zone_management": false,
00:08:58.510 "zone_append": false,
00:08:58.510 "compare": false,
00:08:58.510 "compare_and_write": false,
00:08:58.510 "abort": true,
00:08:58.510 "seek_hole": false,
00:08:58.510 "seek_data": false,
00:08:58.510 "copy": true,
00:08:58.510 "nvme_iov_md": false
00:08:58.510 },
00:08:58.510 "memory_domains": [
00:08:58.510 {
00:08:58.510 "dma_device_id": "system",
00:08:58.510 "dma_device_type": 1
00:08:58.510 },
00:08:58.510 {
00:08:58.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:58.510 "dma_device_type": 2
00:08:58.510 }
00:08:58.510 ],
00:08:58.510 "driver_specific": {}
00:08:58.510 }
00:08:58.510 ]
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:58.510 "name": "Existed_Raid",
00:08:58.510 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:58.510 "strip_size_kb": 64,
00:08:58.510 "state": "configuring",
00:08:58.510 "raid_level": "concat",
00:08:58.510 "superblock": false,
00:08:58.510 "num_base_bdevs": 3,
00:08:58.510 "num_base_bdevs_discovered": 1,
00:08:58.510 "num_base_bdevs_operational": 3,
00:08:58.510 "base_bdevs_list": [
00:08:58.510 {
00:08:58.510 "name": "BaseBdev1",
00:08:58.510 "uuid": "896640b0-9b0f-401e-8231-de88761ede80",
00:08:58.510 "is_configured": true,
00:08:58.510 "data_offset": 0,
00:08:58.510 "data_size": 65536
00:08:58.510 },
00:08:58.510 {
00:08:58.510 "name": "BaseBdev2",
00:08:58.510 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:58.510 "is_configured": false,
00:08:58.510 "data_offset": 0,
00:08:58.510 "data_size": 0
00:08:58.510 },
00:08:58.510 {
00:08:58.510 "name": "BaseBdev3",
00:08:58.510 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:58.510 "is_configured": false,
00:08:58.510 "data_offset": 0,
00:08:58.510 "data_size": 0
00:08:58.510 }
00:08:58.510 ]
00:08:58.510 }'
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:58.510 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.770 [2024-09-29 21:40:17.693381] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:58.770 [2024-09-29 21:40:17.693483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.770 [2024-09-29 21:40:17.705409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:58.770 [2024-09-29 21:40:17.707491] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:58.770 [2024-09-29 21:40:17.707568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:58.770 [2024-09-29 21:40:17.707596] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:58.770 [2024-09-29 21:40:17.707617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:58.770 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:59.029 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:59.029 "name": "Existed_Raid",
00:08:59.029 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:59.029 "strip_size_kb": 64,
00:08:59.029 "state": "configuring",
00:08:59.029 "raid_level": "concat",
00:08:59.029 "superblock": false,
00:08:59.029 "num_base_bdevs": 3,
00:08:59.029 "num_base_bdevs_discovered": 1,
00:08:59.029 "num_base_bdevs_operational": 3,
00:08:59.029 "base_bdevs_list": [
00:08:59.029 {
00:08:59.029 "name": "BaseBdev1",
00:08:59.029 "uuid": "896640b0-9b0f-401e-8231-de88761ede80",
00:08:59.029 "is_configured": true,
00:08:59.029 "data_offset": 0,
00:08:59.029 "data_size": 65536
00:08:59.029 },
00:08:59.029 {
00:08:59.029 "name": "BaseBdev2",
00:08:59.029 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:59.029 "is_configured": false,
00:08:59.029 "data_offset": 0,
00:08:59.029 "data_size": 0
00:08:59.029 },
00:08:59.029 {
00:08:59.029 "name": "BaseBdev3",
00:08:59.029 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:59.029 "is_configured": false,
00:08:59.029 "data_offset": 0,
00:08:59.029 "data_size": 0
00:08:59.029 }
00:08:59.029 ]
00:08:59.029 }'
00:08:59.029 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:59.029 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.290 [2024-09-29 21:40:18.182735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:59.290 BaseBdev2
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.290 [
00:08:59.290 {
00:08:59.290 "name": "BaseBdev2",
00:08:59.290 "aliases": [
00:08:59.290 "65b9b6ad-4c68-4e2d-b013-d43114f4ff6b"
00:08:59.290 ],
00:08:59.290 "product_name": "Malloc disk",
00:08:59.290 "block_size": 512,
00:08:59.290 "num_blocks": 65536,
00:08:59.290 "uuid": "65b9b6ad-4c68-4e2d-b013-d43114f4ff6b",
00:08:59.290 "assigned_rate_limits": {
00:08:59.290 "rw_ios_per_sec": 0,
00:08:59.290 "rw_mbytes_per_sec": 0,
00:08:59.290 "r_mbytes_per_sec": 0,
00:08:59.290 "w_mbytes_per_sec": 0
00:08:59.290 },
00:08:59.290 "claimed": true,
00:08:59.290 "claim_type": "exclusive_write",
00:08:59.290 "zoned": false,
00:08:59.290 "supported_io_types": {
00:08:59.290 "read": true,
00:08:59.290 "write": true,
00:08:59.290 "unmap": true,
00:08:59.290 "flush": true,
00:08:59.290 "reset": true,
00:08:59.290 "nvme_admin": false,
00:08:59.290 "nvme_io": false,
00:08:59.290 "nvme_io_md": false,
00:08:59.290 "write_zeroes": true,
00:08:59.290 "zcopy": true,
00:08:59.290 "get_zone_info": false,
00:08:59.290 "zone_management": false,
00:08:59.290 "zone_append": false,
00:08:59.290 "compare": false,
00:08:59.290 "compare_and_write": false,
00:08:59.290 "abort": true,
00:08:59.290 "seek_hole": false,
00:08:59.290 "seek_data": false,
00:08:59.290 "copy": true,
00:08:59.290 "nvme_iov_md": false
00:08:59.290 },
00:08:59.290 "memory_domains": [
00:08:59.290 {
00:08:59.290 "dma_device_id": "system",
00:08:59.290 "dma_device_type": 1
00:08:59.290 },
00:08:59.290 {
00:08:59.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:59.290 "dma_device_type": 2
00:08:59.290 }
00:08:59.290 ],
00:08:59.290 "driver_specific": {}
00:08:59.290 }
00:08:59.290 ]
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:59.290 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:59.550 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:59.550 "name": "Existed_Raid",
00:08:59.550 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:59.550 "strip_size_kb": 64,
00:08:59.550 "state": "configuring",
00:08:59.550 "raid_level": "concat",
00:08:59.550 "superblock": false,
00:08:59.550 "num_base_bdevs": 3,
00:08:59.550 "num_base_bdevs_discovered": 2,
00:08:59.550 "num_base_bdevs_operational": 3,
00:08:59.550 "base_bdevs_list": [
00:08:59.550 {
00:08:59.550 "name": "BaseBdev1",
00:08:59.550 "uuid": "896640b0-9b0f-401e-8231-de88761ede80",
00:08:59.550 "is_configured": true,
00:08:59.550 "data_offset": 0,
00:08:59.550 "data_size": 65536
00:08:59.550 },
00:08:59.550 {
00:08:59.550 "name": "BaseBdev2",
00:08:59.550 "uuid": "65b9b6ad-4c68-4e2d-b013-d43114f4ff6b",
00:08:59.550 "is_configured": true,
00:08:59.550 "data_offset": 0,
00:08:59.550 "data_size": 65536
00:08:59.550 },
00:08:59.550 {
00:08:59.550 "name": "BaseBdev3",
00:08:59.550 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:59.550 "is_configured": false,
00:08:59.550 "data_offset": 0,
00:08:59.550 "data_size": 0
00:08:59.550 }
00:08:59.550 ]
00:08:59.550 }'
00:08:59.550 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:59.550 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.809 [2024-09-29 21:40:18.665503] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:59.809 [2024-09-29 21:40:18.665650] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:59.809 [2024-09-29 21:40:18.665684] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:08:59.809 [2024-09-29 21:40:18.666048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:08:59.809 [2024-09-29 21:40:18.666307] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:59.809 [2024-09-29 21:40:18.666353] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:08:59.809 [2024-09-29 21:40:18.666695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:59.809 BaseBdev3
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:59.809 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.809 [
00:08:59.809 {
00:08:59.809 "name": "BaseBdev3",
00:08:59.810 "aliases": [
00:08:59.810 "942bdfaa-184c-4aa5-a722-7d77651cd189"
00:08:59.810 ],
00:08:59.810 "product_name": "Malloc disk",
00:08:59.810 "block_size": 512,
00:08:59.810 "num_blocks": 65536,
00:08:59.810 "uuid": "942bdfaa-184c-4aa5-a722-7d77651cd189",
00:08:59.810 "assigned_rate_limits": {
00:08:59.810 "rw_ios_per_sec": 0,
00:08:59.810 "rw_mbytes_per_sec": 0,
00:08:59.810 "r_mbytes_per_sec": 0,
00:08:59.810 "w_mbytes_per_sec": 0
00:08:59.810 },
00:08:59.810 "claimed": true,
00:08:59.810 "claim_type": "exclusive_write",
00:08:59.810 "zoned": false,
00:08:59.810 "supported_io_types": {
00:08:59.810 "read": true,
00:08:59.810 "write": true,
00:08:59.810 "unmap": true,
00:08:59.810 "flush": true,
00:08:59.810 "reset": true,
00:08:59.810 "nvme_admin": false,
00:08:59.810 "nvme_io": false,
00:08:59.810 "nvme_io_md": false,
00:08:59.810 "write_zeroes": true,
00:08:59.810 "zcopy": true,
00:08:59.810 "get_zone_info": false,
00:08:59.810 "zone_management": false,
00:08:59.810 "zone_append": false,
00:08:59.810 "compare": false,
00:08:59.810 "compare_and_write": false,
00:08:59.810 "abort": true,
00:08:59.810 "seek_hole": false,
00:08:59.810 "seek_data": false,
00:08:59.810 "copy": true,
00:08:59.810 "nvme_iov_md": false
00:08:59.810 },
00:08:59.810 "memory_domains": [
00:08:59.810 {
00:08:59.810 "dma_device_id": "system",
00:08:59.810 "dma_device_type": 1
00:08:59.810 },
00:08:59.810 {
00:08:59.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:59.810 "dma_device_type": 2
00:08:59.810 }
00:08:59.810 ],
00:08:59.810 "driver_specific": {}
00:08:59.810 }
00:08:59.810 ]
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:59.810 "name": "Existed_Raid",
00:08:59.810 "uuid": "d448965c-13eb-442c-b68d-0900b9fd1c79",
00:08:59.810 "strip_size_kb": 64,
00:08:59.810 "state": "online",
00:08:59.810 "raid_level": "concat",
00:08:59.810 "superblock": false,
00:08:59.810 "num_base_bdevs": 3,
00:08:59.810 "num_base_bdevs_discovered": 3,
00:08:59.810 "num_base_bdevs_operational": 3,
00:08:59.810 "base_bdevs_list": [
00:08:59.810 {
00:08:59.810 "name": "BaseBdev1",
00:08:59.810 "uuid": "896640b0-9b0f-401e-8231-de88761ede80",
00:08:59.810 "is_configured": true,
00:08:59.810 "data_offset": 0,
00:08:59.810 "data_size": 65536
00:08:59.810 },
00:08:59.810 {
00:08:59.810 "name": "BaseBdev2",
00:08:59.810 "uuid": "65b9b6ad-4c68-4e2d-b013-d43114f4ff6b",
00:08:59.810 "is_configured": true,
00:08:59.810 "data_offset": 0,
00:08:59.810 "data_size": 65536
00:08:59.810 },
00:08:59.810 {
00:08:59.810 "name": "BaseBdev3",
00:08:59.810 "uuid": "942bdfaa-184c-4aa5-a722-7d77651cd189",
00:08:59.810 "is_configured": true,
00:08:59.810 "data_offset": 0,
00:08:59.810 "data_size": 65536
00:08:59.810 }
00:08:59.810 ]
00:08:59.810 }'
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:59.810 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.379 [2024-09-29 21:40:19.165009] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:00.379 "name": "Existed_Raid",
00:09:00.379 "aliases": [
00:09:00.379 "d448965c-13eb-442c-b68d-0900b9fd1c79"
00:09:00.379 ],
00:09:00.379 "product_name": "Raid Volume",
00:09:00.379 "block_size": 512,
00:09:00.379 "num_blocks": 196608,
00:09:00.379 "uuid": "d448965c-13eb-442c-b68d-0900b9fd1c79",
00:09:00.379 "assigned_rate_limits": {
00:09:00.379 "rw_ios_per_sec": 0,
00:09:00.379 "rw_mbytes_per_sec": 0,
00:09:00.379 "r_mbytes_per_sec": 0,
00:09:00.379 "w_mbytes_per_sec": 0
00:09:00.379 },
00:09:00.379 "claimed": false,
00:09:00.379 "zoned": false,
00:09:00.379 "supported_io_types": {
00:09:00.379 "read": true,
00:09:00.379 "write": true,
00:09:00.379 "unmap": true,
00:09:00.379 "flush": true,
00:09:00.379 "reset": true,
00:09:00.379 "nvme_admin": false,
00:09:00.379 "nvme_io": false,
00:09:00.379 "nvme_io_md": false,
00:09:00.379 "write_zeroes": true,
00:09:00.379 "zcopy": false,
00:09:00.379 "get_zone_info": false,
00:09:00.379 "zone_management": false,
00:09:00.379 "zone_append": false,
00:09:00.379 "compare": false,
00:09:00.379 "compare_and_write": false,
00:09:00.379 "abort": false,
00:09:00.379 "seek_hole": false,
00:09:00.379 "seek_data": false,
00:09:00.379 "copy": false,
00:09:00.379 "nvme_iov_md": false
00:09:00.379 },
00:09:00.379 "memory_domains": [
00:09:00.379 {
00:09:00.379 "dma_device_id": "system",
00:09:00.379 "dma_device_type": 1
00:09:00.379 },
00:09:00.379 {
00:09:00.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:00.379 "dma_device_type": 2
00:09:00.379 },
00:09:00.379 {
00:09:00.379 "dma_device_id": "system",
00:09:00.379 "dma_device_type": 1
00:09:00.379 },
00:09:00.379 {
00:09:00.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:00.379 "dma_device_type": 2
00:09:00.379 },
00:09:00.379 {
00:09:00.379 "dma_device_id": "system",
00:09:00.379 "dma_device_type": 1
00:09:00.379 },
00:09:00.379 {
00:09:00.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:00.379 "dma_device_type": 2
00:09:00.379 }
00:09:00.379 ],
00:09:00.379 "driver_specific": {
00:09:00.379 "raid": {
00:09:00.379 "uuid": "d448965c-13eb-442c-b68d-0900b9fd1c79",
00:09:00.379 "strip_size_kb": 64,
00:09:00.379 "state": "online",
00:09:00.379 "raid_level": "concat",
00:09:00.379 "superblock": false,
00:09:00.379 "num_base_bdevs": 3,
00:09:00.379 "num_base_bdevs_discovered": 3,
00:09:00.379 "num_base_bdevs_operational": 3,
00:09:00.379 "base_bdevs_list": [
00:09:00.379 {
00:09:00.379 "name": "BaseBdev1",
00:09:00.379 "uuid": "896640b0-9b0f-401e-8231-de88761ede80",
00:09:00.379 "is_configured": true,
00:09:00.379 "data_offset": 0,
00:09:00.379 "data_size": 65536
00:09:00.379 },
00:09:00.379 {
00:09:00.379 "name": "BaseBdev2",
00:09:00.379 "uuid": "65b9b6ad-4c68-4e2d-b013-d43114f4ff6b",
00:09:00.379 "is_configured": true,
00:09:00.379 "data_offset": 0,
00:09:00.379 "data_size": 65536
00:09:00.379 },
00:09:00.379 {
00:09:00.379 "name": "BaseBdev3",
00:09:00.379 "uuid": "942bdfaa-184c-4aa5-a722-7d77651cd189",
00:09:00.379 "is_configured": true,
00:09:00.379 "data_offset": 0,
00:09:00.379 "data_size": 65536
00:09:00.379 }
00:09:00.379 ]
00:09:00.379 }
00:09:00.379 }
00:09:00.379 }'
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:00.379 BaseBdev2
00:09:00.379 BaseBdev3'
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.379 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:00.638 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:00.638 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:00.638 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:00.638 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:00.638 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:00.638 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:00.638 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:00.638 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:00.638 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:00.638 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:00.638 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.639 [2024-09-29 21:40:19.428289] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.639 [2024-09-29 21:40:19.428359] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.639 [2024-09-29 21:40:19.428440] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.639 "name": "Existed_Raid", 00:09:00.639 "uuid": "d448965c-13eb-442c-b68d-0900b9fd1c79", 00:09:00.639 "strip_size_kb": 64, 00:09:00.639 "state": "offline", 00:09:00.639 "raid_level": "concat", 00:09:00.639 "superblock": false, 00:09:00.639 "num_base_bdevs": 3, 00:09:00.639 "num_base_bdevs_discovered": 2, 00:09:00.639 "num_base_bdevs_operational": 2, 00:09:00.639 "base_bdevs_list": [ 00:09:00.639 { 00:09:00.639 "name": null, 00:09:00.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.639 "is_configured": false, 00:09:00.639 "data_offset": 0, 00:09:00.639 "data_size": 65536 00:09:00.639 }, 00:09:00.639 { 00:09:00.639 "name": "BaseBdev2", 00:09:00.639 "uuid": 
"65b9b6ad-4c68-4e2d-b013-d43114f4ff6b", 00:09:00.639 "is_configured": true, 00:09:00.639 "data_offset": 0, 00:09:00.639 "data_size": 65536 00:09:00.639 }, 00:09:00.639 { 00:09:00.639 "name": "BaseBdev3", 00:09:00.639 "uuid": "942bdfaa-184c-4aa5-a722-7d77651cd189", 00:09:00.639 "is_configured": true, 00:09:00.639 "data_offset": 0, 00:09:00.639 "data_size": 65536 00:09:00.639 } 00:09:00.639 ] 00:09:00.639 }' 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.639 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.207 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:01.207 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.207 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.207 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.207 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.207 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.207 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.207 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.207 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.207 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:01.207 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.207 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.207 [2024-09-29 21:40:19.994974] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.207 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.207 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:01.207 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.207 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.208 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.208 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.208 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:01.208 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.208 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:01.208 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:01.208 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:01.208 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.208 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.208 [2024-09-29 21:40:20.156122] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:01.208 [2024-09-29 21:40:20.156176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:01.467 21:40:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.467 BaseBdev2 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.467 
21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.467 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.467 [ 00:09:01.468 { 00:09:01.468 "name": "BaseBdev2", 00:09:01.468 "aliases": [ 00:09:01.468 "e04cdeee-f7bf-44de-b232-229d67c06341" 00:09:01.468 ], 00:09:01.468 "product_name": "Malloc disk", 00:09:01.468 "block_size": 512, 00:09:01.468 "num_blocks": 65536, 00:09:01.468 "uuid": "e04cdeee-f7bf-44de-b232-229d67c06341", 00:09:01.468 "assigned_rate_limits": { 00:09:01.468 "rw_ios_per_sec": 0, 00:09:01.468 "rw_mbytes_per_sec": 0, 00:09:01.468 "r_mbytes_per_sec": 0, 00:09:01.468 "w_mbytes_per_sec": 0 00:09:01.468 }, 00:09:01.468 "claimed": false, 00:09:01.468 "zoned": false, 00:09:01.468 "supported_io_types": { 00:09:01.468 "read": true, 00:09:01.468 "write": true, 00:09:01.468 "unmap": true, 00:09:01.468 "flush": true, 00:09:01.468 "reset": true, 00:09:01.468 "nvme_admin": false, 00:09:01.468 "nvme_io": false, 00:09:01.468 "nvme_io_md": false, 00:09:01.468 "write_zeroes": true, 
00:09:01.468 "zcopy": true, 00:09:01.468 "get_zone_info": false, 00:09:01.468 "zone_management": false, 00:09:01.468 "zone_append": false, 00:09:01.468 "compare": false, 00:09:01.468 "compare_and_write": false, 00:09:01.468 "abort": true, 00:09:01.468 "seek_hole": false, 00:09:01.468 "seek_data": false, 00:09:01.468 "copy": true, 00:09:01.468 "nvme_iov_md": false 00:09:01.468 }, 00:09:01.468 "memory_domains": [ 00:09:01.468 { 00:09:01.468 "dma_device_id": "system", 00:09:01.468 "dma_device_type": 1 00:09:01.468 }, 00:09:01.468 { 00:09:01.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.468 "dma_device_type": 2 00:09:01.468 } 00:09:01.468 ], 00:09:01.468 "driver_specific": {} 00:09:01.468 } 00:09:01.468 ] 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.468 BaseBdev3 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:01.468 21:40:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.468 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.728 [ 00:09:01.728 { 00:09:01.728 "name": "BaseBdev3", 00:09:01.728 "aliases": [ 00:09:01.728 "09d4be4c-4693-48e2-9885-f1e291d4dbf5" 00:09:01.728 ], 00:09:01.728 "product_name": "Malloc disk", 00:09:01.728 "block_size": 512, 00:09:01.728 "num_blocks": 65536, 00:09:01.728 "uuid": "09d4be4c-4693-48e2-9885-f1e291d4dbf5", 00:09:01.728 "assigned_rate_limits": { 00:09:01.728 "rw_ios_per_sec": 0, 00:09:01.728 "rw_mbytes_per_sec": 0, 00:09:01.728 "r_mbytes_per_sec": 0, 00:09:01.728 "w_mbytes_per_sec": 0 00:09:01.728 }, 00:09:01.728 "claimed": false, 00:09:01.728 "zoned": false, 00:09:01.728 "supported_io_types": { 00:09:01.728 "read": true, 00:09:01.728 "write": true, 00:09:01.728 "unmap": true, 00:09:01.728 "flush": true, 00:09:01.728 "reset": true, 00:09:01.728 "nvme_admin": false, 00:09:01.728 "nvme_io": false, 00:09:01.728 "nvme_io_md": false, 00:09:01.728 "write_zeroes": true, 
00:09:01.728 "zcopy": true, 00:09:01.728 "get_zone_info": false, 00:09:01.728 "zone_management": false, 00:09:01.728 "zone_append": false, 00:09:01.728 "compare": false, 00:09:01.728 "compare_and_write": false, 00:09:01.728 "abort": true, 00:09:01.728 "seek_hole": false, 00:09:01.728 "seek_data": false, 00:09:01.728 "copy": true, 00:09:01.728 "nvme_iov_md": false 00:09:01.728 }, 00:09:01.728 "memory_domains": [ 00:09:01.728 { 00:09:01.728 "dma_device_id": "system", 00:09:01.728 "dma_device_type": 1 00:09:01.728 }, 00:09:01.728 { 00:09:01.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.728 "dma_device_type": 2 00:09:01.728 } 00:09:01.728 ], 00:09:01.728 "driver_specific": {} 00:09:01.728 } 00:09:01.728 ] 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.728 [2024-09-29 21:40:20.482362] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.728 [2024-09-29 21:40:20.482414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.728 [2024-09-29 21:40:20.482436] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.728 [2024-09-29 21:40:20.484446] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.728 21:40:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.728 "name": "Existed_Raid", 00:09:01.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.728 "strip_size_kb": 64, 00:09:01.728 "state": "configuring", 00:09:01.728 "raid_level": "concat", 00:09:01.728 "superblock": false, 00:09:01.728 "num_base_bdevs": 3, 00:09:01.728 "num_base_bdevs_discovered": 2, 00:09:01.728 "num_base_bdevs_operational": 3, 00:09:01.728 "base_bdevs_list": [ 00:09:01.728 { 00:09:01.728 "name": "BaseBdev1", 00:09:01.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.728 "is_configured": false, 00:09:01.728 "data_offset": 0, 00:09:01.728 "data_size": 0 00:09:01.728 }, 00:09:01.728 { 00:09:01.728 "name": "BaseBdev2", 00:09:01.728 "uuid": "e04cdeee-f7bf-44de-b232-229d67c06341", 00:09:01.728 "is_configured": true, 00:09:01.728 "data_offset": 0, 00:09:01.728 "data_size": 65536 00:09:01.728 }, 00:09:01.728 { 00:09:01.728 "name": "BaseBdev3", 00:09:01.728 "uuid": "09d4be4c-4693-48e2-9885-f1e291d4dbf5", 00:09:01.728 "is_configured": true, 00:09:01.728 "data_offset": 0, 00:09:01.728 "data_size": 65536 00:09:01.728 } 00:09:01.728 ] 00:09:01.728 }' 00:09:01.729 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.729 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.988 [2024-09-29 21:40:20.953510] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.988 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.248 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.248 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.248 "name": "Existed_Raid", 00:09:02.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.248 "strip_size_kb": 64, 00:09:02.248 "state": "configuring", 00:09:02.248 "raid_level": "concat", 00:09:02.248 "superblock": false, 
00:09:02.248 "num_base_bdevs": 3, 00:09:02.248 "num_base_bdevs_discovered": 1, 00:09:02.248 "num_base_bdevs_operational": 3, 00:09:02.248 "base_bdevs_list": [ 00:09:02.248 { 00:09:02.248 "name": "BaseBdev1", 00:09:02.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.248 "is_configured": false, 00:09:02.248 "data_offset": 0, 00:09:02.248 "data_size": 0 00:09:02.248 }, 00:09:02.248 { 00:09:02.248 "name": null, 00:09:02.248 "uuid": "e04cdeee-f7bf-44de-b232-229d67c06341", 00:09:02.248 "is_configured": false, 00:09:02.248 "data_offset": 0, 00:09:02.248 "data_size": 65536 00:09:02.248 }, 00:09:02.248 { 00:09:02.248 "name": "BaseBdev3", 00:09:02.248 "uuid": "09d4be4c-4693-48e2-9885-f1e291d4dbf5", 00:09:02.248 "is_configured": true, 00:09:02.248 "data_offset": 0, 00:09:02.248 "data_size": 65536 00:09:02.248 } 00:09:02.248 ] 00:09:02.248 }' 00:09:02.248 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.248 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.507 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.507 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.507 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.507 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:02.507 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.507 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:02.507 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:02.507 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.507 
21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.767 [2024-09-29 21:40:21.522681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.767 BaseBdev1 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.767 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.767 [ 00:09:02.767 { 00:09:02.767 "name": "BaseBdev1", 00:09:02.767 "aliases": [ 00:09:02.767 "9a32fa05-d70c-43cd-ba6a-4475506d780a" 00:09:02.767 ], 00:09:02.767 "product_name": 
"Malloc disk", 00:09:02.767 "block_size": 512, 00:09:02.767 "num_blocks": 65536, 00:09:02.767 "uuid": "9a32fa05-d70c-43cd-ba6a-4475506d780a", 00:09:02.767 "assigned_rate_limits": { 00:09:02.767 "rw_ios_per_sec": 0, 00:09:02.767 "rw_mbytes_per_sec": 0, 00:09:02.767 "r_mbytes_per_sec": 0, 00:09:02.767 "w_mbytes_per_sec": 0 00:09:02.767 }, 00:09:02.767 "claimed": true, 00:09:02.767 "claim_type": "exclusive_write", 00:09:02.767 "zoned": false, 00:09:02.767 "supported_io_types": { 00:09:02.767 "read": true, 00:09:02.767 "write": true, 00:09:02.768 "unmap": true, 00:09:02.768 "flush": true, 00:09:02.768 "reset": true, 00:09:02.768 "nvme_admin": false, 00:09:02.768 "nvme_io": false, 00:09:02.768 "nvme_io_md": false, 00:09:02.768 "write_zeroes": true, 00:09:02.768 "zcopy": true, 00:09:02.768 "get_zone_info": false, 00:09:02.768 "zone_management": false, 00:09:02.768 "zone_append": false, 00:09:02.768 "compare": false, 00:09:02.768 "compare_and_write": false, 00:09:02.768 "abort": true, 00:09:02.768 "seek_hole": false, 00:09:02.768 "seek_data": false, 00:09:02.768 "copy": true, 00:09:02.768 "nvme_iov_md": false 00:09:02.768 }, 00:09:02.768 "memory_domains": [ 00:09:02.768 { 00:09:02.768 "dma_device_id": "system", 00:09:02.768 "dma_device_type": 1 00:09:02.768 }, 00:09:02.768 { 00:09:02.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.768 "dma_device_type": 2 00:09:02.768 } 00:09:02.768 ], 00:09:02.768 "driver_specific": {} 00:09:02.768 } 00:09:02.768 ] 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.768 21:40:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.768 "name": "Existed_Raid", 00:09:02.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.768 "strip_size_kb": 64, 00:09:02.768 "state": "configuring", 00:09:02.768 "raid_level": "concat", 00:09:02.768 "superblock": false, 00:09:02.768 "num_base_bdevs": 3, 00:09:02.768 "num_base_bdevs_discovered": 2, 00:09:02.768 "num_base_bdevs_operational": 3, 00:09:02.768 "base_bdevs_list": [ 00:09:02.768 { 00:09:02.768 "name": "BaseBdev1", 
00:09:02.768 "uuid": "9a32fa05-d70c-43cd-ba6a-4475506d780a", 00:09:02.768 "is_configured": true, 00:09:02.768 "data_offset": 0, 00:09:02.768 "data_size": 65536 00:09:02.768 }, 00:09:02.768 { 00:09:02.768 "name": null, 00:09:02.768 "uuid": "e04cdeee-f7bf-44de-b232-229d67c06341", 00:09:02.768 "is_configured": false, 00:09:02.768 "data_offset": 0, 00:09:02.768 "data_size": 65536 00:09:02.768 }, 00:09:02.768 { 00:09:02.768 "name": "BaseBdev3", 00:09:02.768 "uuid": "09d4be4c-4693-48e2-9885-f1e291d4dbf5", 00:09:02.768 "is_configured": true, 00:09:02.768 "data_offset": 0, 00:09:02.768 "data_size": 65536 00:09:02.768 } 00:09:02.768 ] 00:09:02.768 }' 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.768 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.027 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.027 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.027 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.027 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:03.027 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.287 [2024-09-29 21:40:22.045840] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:03.287 
21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.287 "name": "Existed_Raid", 00:09:03.287 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:03.287 "strip_size_kb": 64, 00:09:03.287 "state": "configuring", 00:09:03.287 "raid_level": "concat", 00:09:03.287 "superblock": false, 00:09:03.287 "num_base_bdevs": 3, 00:09:03.287 "num_base_bdevs_discovered": 1, 00:09:03.287 "num_base_bdevs_operational": 3, 00:09:03.287 "base_bdevs_list": [ 00:09:03.287 { 00:09:03.287 "name": "BaseBdev1", 00:09:03.287 "uuid": "9a32fa05-d70c-43cd-ba6a-4475506d780a", 00:09:03.287 "is_configured": true, 00:09:03.287 "data_offset": 0, 00:09:03.287 "data_size": 65536 00:09:03.287 }, 00:09:03.287 { 00:09:03.287 "name": null, 00:09:03.287 "uuid": "e04cdeee-f7bf-44de-b232-229d67c06341", 00:09:03.287 "is_configured": false, 00:09:03.287 "data_offset": 0, 00:09:03.287 "data_size": 65536 00:09:03.287 }, 00:09:03.287 { 00:09:03.287 "name": null, 00:09:03.287 "uuid": "09d4be4c-4693-48e2-9885-f1e291d4dbf5", 00:09:03.287 "is_configured": false, 00:09:03.287 "data_offset": 0, 00:09:03.287 "data_size": 65536 00:09:03.287 } 00:09:03.287 ] 00:09:03.287 }' 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.287 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.546 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.546 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.546 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.546 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:03.546 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.807 [2024-09-29 21:40:22.540996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.807 "name": "Existed_Raid", 00:09:03.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.807 "strip_size_kb": 64, 00:09:03.807 "state": "configuring", 00:09:03.807 "raid_level": "concat", 00:09:03.807 "superblock": false, 00:09:03.807 "num_base_bdevs": 3, 00:09:03.807 "num_base_bdevs_discovered": 2, 00:09:03.807 "num_base_bdevs_operational": 3, 00:09:03.807 "base_bdevs_list": [ 00:09:03.807 { 00:09:03.807 "name": "BaseBdev1", 00:09:03.807 "uuid": "9a32fa05-d70c-43cd-ba6a-4475506d780a", 00:09:03.807 "is_configured": true, 00:09:03.807 "data_offset": 0, 00:09:03.807 "data_size": 65536 00:09:03.807 }, 00:09:03.807 { 00:09:03.807 "name": null, 00:09:03.807 "uuid": "e04cdeee-f7bf-44de-b232-229d67c06341", 00:09:03.807 "is_configured": false, 00:09:03.807 "data_offset": 0, 00:09:03.807 "data_size": 65536 00:09:03.807 }, 00:09:03.807 { 00:09:03.807 "name": "BaseBdev3", 00:09:03.807 "uuid": "09d4be4c-4693-48e2-9885-f1e291d4dbf5", 00:09:03.807 "is_configured": true, 00:09:03.807 "data_offset": 0, 00:09:03.807 "data_size": 65536 00:09:03.807 } 00:09:03.807 ] 00:09:03.807 }' 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.807 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.066 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.066 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.066 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:04.066 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.066 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.066 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:04.066 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:04.066 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.066 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.066 [2024-09-29 21:40:23.016300] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.326 21:40:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.326 "name": "Existed_Raid", 00:09:04.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.326 "strip_size_kb": 64, 00:09:04.326 "state": "configuring", 00:09:04.326 "raid_level": "concat", 00:09:04.326 "superblock": false, 00:09:04.326 "num_base_bdevs": 3, 00:09:04.326 "num_base_bdevs_discovered": 1, 00:09:04.326 "num_base_bdevs_operational": 3, 00:09:04.326 "base_bdevs_list": [ 00:09:04.326 { 00:09:04.326 "name": null, 00:09:04.326 "uuid": "9a32fa05-d70c-43cd-ba6a-4475506d780a", 00:09:04.326 "is_configured": false, 00:09:04.326 "data_offset": 0, 00:09:04.326 "data_size": 65536 00:09:04.326 }, 00:09:04.326 { 00:09:04.326 "name": null, 00:09:04.326 "uuid": "e04cdeee-f7bf-44de-b232-229d67c06341", 00:09:04.326 "is_configured": false, 00:09:04.326 "data_offset": 0, 00:09:04.326 "data_size": 65536 00:09:04.326 }, 00:09:04.326 { 00:09:04.326 "name": "BaseBdev3", 00:09:04.326 "uuid": "09d4be4c-4693-48e2-9885-f1e291d4dbf5", 00:09:04.326 "is_configured": true, 00:09:04.326 "data_offset": 0, 00:09:04.326 "data_size": 65536 00:09:04.326 } 00:09:04.326 ] 00:09:04.326 }' 00:09:04.326 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.326 21:40:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.586 [2024-09-29 21:40:23.491349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.586 21:40:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.586 "name": "Existed_Raid", 00:09:04.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.586 "strip_size_kb": 64, 00:09:04.586 "state": "configuring", 00:09:04.586 "raid_level": "concat", 00:09:04.586 "superblock": false, 00:09:04.586 "num_base_bdevs": 3, 00:09:04.586 "num_base_bdevs_discovered": 2, 00:09:04.586 "num_base_bdevs_operational": 3, 00:09:04.586 "base_bdevs_list": [ 00:09:04.586 { 00:09:04.586 "name": null, 00:09:04.586 "uuid": "9a32fa05-d70c-43cd-ba6a-4475506d780a", 00:09:04.586 "is_configured": false, 00:09:04.586 "data_offset": 0, 00:09:04.586 "data_size": 65536 00:09:04.586 }, 00:09:04.586 { 00:09:04.586 "name": "BaseBdev2", 00:09:04.586 "uuid": "e04cdeee-f7bf-44de-b232-229d67c06341", 00:09:04.586 "is_configured": true, 00:09:04.586 "data_offset": 
0, 00:09:04.586 "data_size": 65536 00:09:04.586 }, 00:09:04.586 { 00:09:04.586 "name": "BaseBdev3", 00:09:04.586 "uuid": "09d4be4c-4693-48e2-9885-f1e291d4dbf5", 00:09:04.586 "is_configured": true, 00:09:04.586 "data_offset": 0, 00:09:04.586 "data_size": 65536 00:09:04.586 } 00:09:04.586 ] 00:09:04.586 }' 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.586 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.156 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.156 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.156 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.156 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:05.156 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9a32fa05-d70c-43cd-ba6a-4475506d780a 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.156 [2024-09-29 21:40:24.115491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:05.156 [2024-09-29 21:40:24.115538] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:05.156 [2024-09-29 21:40:24.115548] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:05.156 [2024-09-29 21:40:24.115821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:05.156 [2024-09-29 21:40:24.115978] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:05.156 [2024-09-29 21:40:24.116008] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:05.156 [2024-09-29 21:40:24.116324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.156 NewBaseBdev 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:05.156 
21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.156 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.416 [ 00:09:05.416 { 00:09:05.416 "name": "NewBaseBdev", 00:09:05.416 "aliases": [ 00:09:05.416 "9a32fa05-d70c-43cd-ba6a-4475506d780a" 00:09:05.416 ], 00:09:05.416 "product_name": "Malloc disk", 00:09:05.417 "block_size": 512, 00:09:05.417 "num_blocks": 65536, 00:09:05.417 "uuid": "9a32fa05-d70c-43cd-ba6a-4475506d780a", 00:09:05.417 "assigned_rate_limits": { 00:09:05.417 "rw_ios_per_sec": 0, 00:09:05.417 "rw_mbytes_per_sec": 0, 00:09:05.417 "r_mbytes_per_sec": 0, 00:09:05.417 "w_mbytes_per_sec": 0 00:09:05.417 }, 00:09:05.417 "claimed": true, 00:09:05.417 "claim_type": "exclusive_write", 00:09:05.417 "zoned": false, 00:09:05.417 "supported_io_types": { 00:09:05.417 "read": true, 00:09:05.417 "write": true, 00:09:05.417 "unmap": true, 00:09:05.417 "flush": true, 00:09:05.417 "reset": true, 00:09:05.417 "nvme_admin": false, 00:09:05.417 "nvme_io": false, 00:09:05.417 "nvme_io_md": false, 00:09:05.417 "write_zeroes": true, 00:09:05.417 "zcopy": true, 00:09:05.417 "get_zone_info": false, 00:09:05.417 "zone_management": false, 00:09:05.417 "zone_append": false, 00:09:05.417 "compare": false, 00:09:05.417 "compare_and_write": false, 00:09:05.417 "abort": true, 00:09:05.417 "seek_hole": false, 00:09:05.417 "seek_data": false, 00:09:05.417 "copy": true, 00:09:05.417 "nvme_iov_md": false 00:09:05.417 }, 00:09:05.417 
"memory_domains": [ 00:09:05.417 { 00:09:05.417 "dma_device_id": "system", 00:09:05.417 "dma_device_type": 1 00:09:05.417 }, 00:09:05.417 { 00:09:05.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.417 "dma_device_type": 2 00:09:05.417 } 00:09:05.417 ], 00:09:05.417 "driver_specific": {} 00:09:05.417 } 00:09:05.417 ] 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.417 "name": "Existed_Raid", 00:09:05.417 "uuid": "0678975a-4b5e-444f-a7b2-880cd5ec1638", 00:09:05.417 "strip_size_kb": 64, 00:09:05.417 "state": "online", 00:09:05.417 "raid_level": "concat", 00:09:05.417 "superblock": false, 00:09:05.417 "num_base_bdevs": 3, 00:09:05.417 "num_base_bdevs_discovered": 3, 00:09:05.417 "num_base_bdevs_operational": 3, 00:09:05.417 "base_bdevs_list": [ 00:09:05.417 { 00:09:05.417 "name": "NewBaseBdev", 00:09:05.417 "uuid": "9a32fa05-d70c-43cd-ba6a-4475506d780a", 00:09:05.417 "is_configured": true, 00:09:05.417 "data_offset": 0, 00:09:05.417 "data_size": 65536 00:09:05.417 }, 00:09:05.417 { 00:09:05.417 "name": "BaseBdev2", 00:09:05.417 "uuid": "e04cdeee-f7bf-44de-b232-229d67c06341", 00:09:05.417 "is_configured": true, 00:09:05.417 "data_offset": 0, 00:09:05.417 "data_size": 65536 00:09:05.417 }, 00:09:05.417 { 00:09:05.417 "name": "BaseBdev3", 00:09:05.417 "uuid": "09d4be4c-4693-48e2-9885-f1e291d4dbf5", 00:09:05.417 "is_configured": true, 00:09:05.417 "data_offset": 0, 00:09:05.417 "data_size": 65536 00:09:05.417 } 00:09:05.417 ] 00:09:05.417 }' 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.417 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.678 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.678 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.678 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:05.678 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.678 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.678 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.678 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.678 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.678 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.678 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.678 [2024-09-29 21:40:24.614900] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.678 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.678 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.678 "name": "Existed_Raid", 00:09:05.678 "aliases": [ 00:09:05.678 "0678975a-4b5e-444f-a7b2-880cd5ec1638" 00:09:05.678 ], 00:09:05.678 "product_name": "Raid Volume", 00:09:05.678 "block_size": 512, 00:09:05.678 "num_blocks": 196608, 00:09:05.678 "uuid": "0678975a-4b5e-444f-a7b2-880cd5ec1638", 00:09:05.678 "assigned_rate_limits": { 00:09:05.678 "rw_ios_per_sec": 0, 00:09:05.678 "rw_mbytes_per_sec": 0, 00:09:05.678 "r_mbytes_per_sec": 0, 00:09:05.678 "w_mbytes_per_sec": 0 00:09:05.678 }, 00:09:05.678 "claimed": false, 00:09:05.678 "zoned": false, 00:09:05.678 "supported_io_types": { 00:09:05.678 "read": true, 00:09:05.678 "write": true, 00:09:05.678 "unmap": true, 00:09:05.678 "flush": true, 00:09:05.678 "reset": true, 00:09:05.678 "nvme_admin": false, 00:09:05.678 "nvme_io": false, 00:09:05.678 "nvme_io_md": false, 00:09:05.678 "write_zeroes": true, 
00:09:05.678 "zcopy": false,
00:09:05.678 "get_zone_info": false,
00:09:05.678 "zone_management": false,
00:09:05.678 "zone_append": false,
00:09:05.678 "compare": false,
00:09:05.678 "compare_and_write": false,
00:09:05.678 "abort": false,
00:09:05.678 "seek_hole": false,
00:09:05.678 "seek_data": false,
00:09:05.678 "copy": false,
00:09:05.678 "nvme_iov_md": false
00:09:05.678 },
00:09:05.678 "memory_domains": [
00:09:05.678 {
00:09:05.678 "dma_device_id": "system",
00:09:05.678 "dma_device_type": 1
00:09:05.678 },
00:09:05.678 {
00:09:05.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:05.678 "dma_device_type": 2
00:09:05.678 },
00:09:05.678 {
00:09:05.678 "dma_device_id": "system",
00:09:05.678 "dma_device_type": 1
00:09:05.678 },
00:09:05.678 {
00:09:05.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:05.678 "dma_device_type": 2
00:09:05.678 },
00:09:05.678 {
00:09:05.678 "dma_device_id": "system",
00:09:05.678 "dma_device_type": 1
00:09:05.678 },
00:09:05.678 {
00:09:05.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:05.678 "dma_device_type": 2
00:09:05.678 }
00:09:05.678 ],
00:09:05.678 "driver_specific": {
00:09:05.678 "raid": {
00:09:05.678 "uuid": "0678975a-4b5e-444f-a7b2-880cd5ec1638",
00:09:05.678 "strip_size_kb": 64,
00:09:05.678 "state": "online",
00:09:05.678 "raid_level": "concat",
00:09:05.678 "superblock": false,
00:09:05.678 "num_base_bdevs": 3,
00:09:05.678 "num_base_bdevs_discovered": 3,
00:09:05.678 "num_base_bdevs_operational": 3,
00:09:05.678 "base_bdevs_list": [
00:09:05.678 {
00:09:05.678 "name": "NewBaseBdev",
00:09:05.678 "uuid": "9a32fa05-d70c-43cd-ba6a-4475506d780a",
00:09:05.678 "is_configured": true,
00:09:05.678 "data_offset": 0,
00:09:05.678 "data_size": 65536
00:09:05.678 },
00:09:05.678 {
00:09:05.678 "name": "BaseBdev2",
00:09:05.678 "uuid": "e04cdeee-f7bf-44de-b232-229d67c06341",
00:09:05.678 "is_configured": true,
00:09:05.678 "data_offset": 0,
00:09:05.678 "data_size": 65536
00:09:05.678 },
00:09:05.678 {
00:09:05.678 "name": "BaseBdev3",
00:09:05.678 "uuid": "09d4be4c-4693-48e2-9885-f1e291d4dbf5",
00:09:05.678 "is_configured": true,
00:09:05.678 "data_offset": 0,
00:09:05.678 "data_size": 65536
00:09:05.678 }
00:09:05.678 ]
00:09:05.678 }
00:09:05.678 }
00:09:05.678 }'
00:09:05.678 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:09:05.939 BaseBdev2
00:09:05.939 BaseBdev3'
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.939 [2024-09-29 21:40:24.838232] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:05.939 [2024-09-29 21:40:24.838258] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:05.939 [2024-09-29 21:40:24.838324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:05.939 [2024-09-29 21:40:24.838373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:05.939 [2024-09-29 21:40:24.838385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65671
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65671 ']'
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65671
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65671
00:09:05.939 killing process with pid 65671 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65671'
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 65671
00:09:05.939 [2024-09-29 21:40:24.878633] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:05.939 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65671
00:09:06.509 [2024-09-29 21:40:25.195124] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:09:07.890
00:09:07.890 real 0m10.784s
00:09:07.890 user 0m16.729s
00:09:07.890 sys 0m2.037s
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.890 ************************************
00:09:07.890 END TEST raid_state_function_test
00:09:07.890 ************************************
00:09:07.890 21:40:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true
00:09:07.890 21:40:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:07.890 21:40:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:07.890 21:40:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:07.890 ************************************
00:09:07.890 START TEST raid_state_function_test_sb
00:09:07.890 ************************************
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66295
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:07.890 Process raid pid: 66295
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66295'
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66295
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66295 ']'
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:07.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:07.890 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:08.150 [2024-09-29 21:40:26.708724] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:09:08.150 [2024-09-29 21:40:26.708838] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:08.150 [2024-09-29 21:40:26.877572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:08.410 [2024-09-29 21:40:27.129748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:08.410 [2024-09-29 21:40:27.361575] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:08.410 [2024-09-29 21:40:27.361614] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:08.669 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:08.669 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0
00:09:08.669 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:08.669 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.669 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:08.669 [2024-09-29 21:40:27.524084] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:08.669 [2024-09-29 21:40:27.524140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:08.669 [2024-09-29 21:40:27.524152] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:08.669 [2024-09-29 21:40:27.524161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:08.669 [2024-09-29 21:40:27.524167] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:08.669 [2024-09-29 21:40:27.524177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:08.669 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.669 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:08.670 "name": "Existed_Raid",
00:09:08.670 "uuid": "8a290725-918c-46f8-96c5-c1d886b504ae",
00:09:08.670 "strip_size_kb": 64,
00:09:08.670 "state": "configuring",
00:09:08.670 "raid_level": "concat",
00:09:08.670 "superblock": true,
00:09:08.670 "num_base_bdevs": 3,
00:09:08.670 "num_base_bdevs_discovered": 0,
00:09:08.670 "num_base_bdevs_operational": 3,
00:09:08.670 "base_bdevs_list": [
00:09:08.670 {
00:09:08.670 "name": "BaseBdev1",
00:09:08.670 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:08.670 "is_configured": false,
00:09:08.670 "data_offset": 0,
00:09:08.670 "data_size": 0
00:09:08.670 },
00:09:08.670 {
00:09:08.670 "name": "BaseBdev2",
00:09:08.670 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:08.670 "is_configured": false,
00:09:08.670 "data_offset": 0,
00:09:08.670 "data_size": 0
00:09:08.670 },
00:09:08.670 {
00:09:08.670 "name": "BaseBdev3",
00:09:08.670 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:08.670 "is_configured": false,
00:09:08.670 "data_offset": 0,
00:09:08.670 "data_size": 0
00:09:08.670 }
00:09:08.670 ]
00:09:08.670 }'
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:08.670 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.239 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:09.239 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.239 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.239 [2024-09-29 21:40:27.955207] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:09.239 [2024-09-29 21:40:27.955246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:09.239 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.239 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:09.239 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.239 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.239 [2024-09-29 21:40:27.967226] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:09.239 [2024-09-29 21:40:27.967269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:09.239 [2024-09-29 21:40:27.967277] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:09.239 [2024-09-29 21:40:27.967286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:09.239 [2024-09-29 21:40:27.967292] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:09.239 [2024-09-29 21:40:27.967301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:09.239 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.239 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:09.239 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.239 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.239 [2024-09-29 21:40:28.054267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:09.239 BaseBdev1
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.239 [
00:09:09.239 {
00:09:09.239 "name": "BaseBdev1",
00:09:09.239 "aliases": [
00:09:09.239 "833cbddc-e83e-4232-8941-f8405159bd89"
00:09:09.239 ],
00:09:09.239 "product_name": "Malloc disk",
00:09:09.239 "block_size": 512,
00:09:09.239 "num_blocks": 65536,
00:09:09.239 "uuid": "833cbddc-e83e-4232-8941-f8405159bd89",
00:09:09.239 "assigned_rate_limits": {
00:09:09.239 "rw_ios_per_sec": 0,
00:09:09.239 "rw_mbytes_per_sec": 0,
00:09:09.239 "r_mbytes_per_sec": 0,
00:09:09.239 "w_mbytes_per_sec": 0
00:09:09.239 },
00:09:09.239 "claimed": true,
00:09:09.239 "claim_type": "exclusive_write",
00:09:09.239 "zoned": false,
00:09:09.239 "supported_io_types": {
00:09:09.239 "read": true,
00:09:09.239 "write": true,
00:09:09.239 "unmap": true,
00:09:09.239 "flush": true,
00:09:09.239 "reset": true,
00:09:09.239 "nvme_admin": false,
00:09:09.239 "nvme_io": false,
00:09:09.239 "nvme_io_md": false,
00:09:09.239 "write_zeroes": true,
00:09:09.239 "zcopy": true,
00:09:09.239 "get_zone_info": false,
00:09:09.239 "zone_management": false,
00:09:09.239 "zone_append": false,
00:09:09.239 "compare": false,
00:09:09.239 "compare_and_write": false,
00:09:09.239 "abort": true,
00:09:09.239 "seek_hole": false,
00:09:09.239 "seek_data": false,
00:09:09.239 "copy": true,
00:09:09.239 "nvme_iov_md": false
00:09:09.239 },
00:09:09.239 "memory_domains": [
00:09:09.239 {
00:09:09.239 "dma_device_id": "system",
00:09:09.239 "dma_device_type": 1
00:09:09.239 },
00:09:09.239 {
00:09:09.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:09.239 "dma_device_type": 2
00:09:09.239 }
00:09:09.239 ],
00:09:09.239 "driver_specific": {}
00:09:09.239 }
00:09:09.239 ]
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:09.239 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:09.240 "name": "Existed_Raid",
00:09:09.240 "uuid": "88cfcd1c-ca89-431d-8cbd-a782590d880e",
00:09:09.240 "strip_size_kb": 64,
00:09:09.240 "state": "configuring",
00:09:09.240 "raid_level": "concat",
00:09:09.240 "superblock": true,
00:09:09.240 "num_base_bdevs": 3,
00:09:09.240 "num_base_bdevs_discovered": 1,
00:09:09.240 "num_base_bdevs_operational": 3,
00:09:09.240 "base_bdevs_list": [
00:09:09.240 {
00:09:09.240 "name": "BaseBdev1",
00:09:09.240 "uuid": "833cbddc-e83e-4232-8941-f8405159bd89",
00:09:09.240 "is_configured": true,
00:09:09.240 "data_offset": 2048,
00:09:09.240 "data_size": 63488
00:09:09.240 },
00:09:09.240 {
00:09:09.240 "name": "BaseBdev2",
00:09:09.240 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:09.240 "is_configured": false,
00:09:09.240 "data_offset": 0,
00:09:09.240 "data_size": 0
00:09:09.240 },
00:09:09.240 {
00:09:09.240 "name": "BaseBdev3",
00:09:09.240 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:09.240 "is_configured": false,
00:09:09.240 "data_offset": 0,
00:09:09.240 "data_size": 0
00:09:09.240 }
00:09:09.240 ]
00:09:09.240 }'
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:09.240 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.815 [2024-09-29 21:40:28.537455] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:09.815 [2024-09-29 21:40:28.537498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.815 [2024-09-29 21:40:28.549504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:09.815 [2024-09-29 21:40:28.551631] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:09.815 [2024-09-29 21:40:28.551675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:09.815 [2024-09-29 21:40:28.551685] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:09.815 [2024-09-29 21:40:28.551695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:09.815 "name": "Existed_Raid",
00:09:09.815 "uuid": "199aa9e9-983f-468e-950c-4085b5bccbac",
00:09:09.815 "strip_size_kb": 64,
00:09:09.815 "state": "configuring",
00:09:09.815 "raid_level": "concat",
00:09:09.815 "superblock": true,
00:09:09.815 "num_base_bdevs": 3,
00:09:09.815 "num_base_bdevs_discovered": 1,
00:09:09.815 "num_base_bdevs_operational": 3,
00:09:09.815 "base_bdevs_list": [
00:09:09.815 {
00:09:09.815 "name": "BaseBdev1",
00:09:09.815 "uuid": "833cbddc-e83e-4232-8941-f8405159bd89",
00:09:09.815 "is_configured": true,
00:09:09.815 "data_offset": 2048,
00:09:09.815 "data_size": 63488
00:09:09.815 },
00:09:09.815 {
00:09:09.815 "name": "BaseBdev2",
00:09:09.815 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:09.815 "is_configured": false,
00:09:09.815 "data_offset": 0,
00:09:09.815 "data_size": 0
00:09:09.815 },
00:09:09.815 {
00:09:09.815 "name": "BaseBdev3",
00:09:09.815 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:09.815 "is_configured": false,
00:09:09.815 "data_offset": 0,
00:09:09.815 "data_size": 0
00:09:09.815 }
00:09:09.815 ]
00:09:09.815 }'
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:09.815 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.090 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:10.090 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.090 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.090 [2024-09-29 21:40:29.068835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:10.090 BaseBdev2
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.360 [
00:09:10.360 {
00:09:10.360 "name": "BaseBdev2",
00:09:10.360 "aliases": [
00:09:10.360 "2b223522-63ab-4f4f-9a23-0bdc49023d70"
00:09:10.360 ],
00:09:10.360 "product_name": "Malloc disk",
00:09:10.360 "block_size": 512,
00:09:10.360 "num_blocks": 65536,
00:09:10.360 "uuid": "2b223522-63ab-4f4f-9a23-0bdc49023d70",
00:09:10.360 "assigned_rate_limits": {
00:09:10.360 "rw_ios_per_sec": 0,
00:09:10.360 "rw_mbytes_per_sec": 0,
00:09:10.360 "r_mbytes_per_sec": 0,
00:09:10.360 "w_mbytes_per_sec": 0
00:09:10.360 },
00:09:10.360 "claimed": true,
00:09:10.360 "claim_type": "exclusive_write",
00:09:10.360 "zoned": false,
00:09:10.360 "supported_io_types": {
00:09:10.360 "read": true,
00:09:10.360 "write": true,
00:09:10.360 "unmap": true,
00:09:10.360 "flush": true,
00:09:10.360 "reset": true,
00:09:10.360 "nvme_admin": false,
00:09:10.360 "nvme_io": false,
00:09:10.360 "nvme_io_md": false,
00:09:10.360 "write_zeroes": true,
00:09:10.360 "zcopy": true,
00:09:10.360 "get_zone_info": false,
00:09:10.360 "zone_management": false,
00:09:10.360 "zone_append": false,
00:09:10.360 "compare": false,
00:09:10.360 "compare_and_write": false,
00:09:10.360 "abort": true,
00:09:10.360 "seek_hole": false,
00:09:10.360 "seek_data": false,
00:09:10.360 "copy": true,
00:09:10.360 "nvme_iov_md": false
00:09:10.360 },
00:09:10.360 "memory_domains": [
00:09:10.360 {
00:09:10.360 "dma_device_id": "system",
00:09:10.360 "dma_device_type": 1
00:09:10.360 },
00:09:10.360 {
00:09:10.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:10.360 "dma_device_type": 2
00:09:10.360 }
00:09:10.360 ],
00:09:10.360 "driver_specific": {}
00:09:10.360 }
00:09:10.360 ]
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:10.360 "name": "Existed_Raid",
00:09:10.360 "uuid": "199aa9e9-983f-468e-950c-4085b5bccbac",
00:09:10.360 "strip_size_kb": 64,
00:09:10.360 "state": "configuring",
00:09:10.360 "raid_level": "concat",
00:09:10.360 "superblock": true,
00:09:10.360 "num_base_bdevs": 3,
00:09:10.360 "num_base_bdevs_discovered": 2,
00:09:10.360 "num_base_bdevs_operational": 3,
00:09:10.360 "base_bdevs_list": [
00:09:10.360 {
00:09:10.360 "name": "BaseBdev1",
00:09:10.360 "uuid": "833cbddc-e83e-4232-8941-f8405159bd89",
00:09:10.360 "is_configured": true,
00:09:10.360 "data_offset": 2048,
00:09:10.360 "data_size": 63488
00:09:10.360 },
00:09:10.360 {
00:09:10.360 "name": "BaseBdev2",
00:09:10.360 "uuid": "2b223522-63ab-4f4f-9a23-0bdc49023d70",
00:09:10.360 "is_configured": true,
00:09:10.360 "data_offset": 2048,
00:09:10.360 "data_size": 63488
00:09:10.360 },
00:09:10.360 {
00:09:10.360 "name": "BaseBdev3",
00:09:10.360 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:10.360 "is_configured": false,
00:09:10.360 "data_offset": 0,
00:09:10.360 "data_size": 0
00:09:10.360 }
00:09:10.360 ]
00:09:10.360 }'
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:10.360 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:10.619 21:40:29
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.619 [2024-09-29 21:40:29.548153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.619 [2024-09-29 21:40:29.548439] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:10.619 [2024-09-29 21:40:29.548482] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.619 BaseBdev3 00:09:10.619 [2024-09-29 21:40:29.548962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:10.619 [2024-09-29 21:40:29.549157] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:10.619 [2024-09-29 21:40:29.549174] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:10.619 [2024-09-29 21:40:29.549333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.619 [ 00:09:10.619 { 00:09:10.619 "name": "BaseBdev3", 00:09:10.619 "aliases": [ 00:09:10.619 "55c486f1-000f-45e1-a239-997f69987243" 00:09:10.619 ], 00:09:10.619 "product_name": "Malloc disk", 00:09:10.619 "block_size": 512, 00:09:10.619 "num_blocks": 65536, 00:09:10.619 "uuid": "55c486f1-000f-45e1-a239-997f69987243", 00:09:10.619 "assigned_rate_limits": { 00:09:10.619 "rw_ios_per_sec": 0, 00:09:10.619 "rw_mbytes_per_sec": 0, 00:09:10.619 "r_mbytes_per_sec": 0, 00:09:10.619 "w_mbytes_per_sec": 0 00:09:10.619 }, 00:09:10.619 "claimed": true, 00:09:10.619 "claim_type": "exclusive_write", 00:09:10.619 "zoned": false, 00:09:10.619 "supported_io_types": { 00:09:10.619 "read": true, 00:09:10.619 "write": true, 00:09:10.619 "unmap": true, 00:09:10.619 "flush": true, 00:09:10.619 "reset": true, 00:09:10.619 "nvme_admin": false, 00:09:10.619 "nvme_io": false, 00:09:10.619 "nvme_io_md": false, 00:09:10.619 "write_zeroes": true, 00:09:10.619 "zcopy": true, 00:09:10.619 "get_zone_info": false, 00:09:10.619 "zone_management": false, 00:09:10.619 "zone_append": false, 00:09:10.619 "compare": false, 00:09:10.619 "compare_and_write": false, 00:09:10.619 "abort": true, 00:09:10.619 "seek_hole": false, 00:09:10.619 "seek_data": false, 
00:09:10.619 "copy": true, 00:09:10.619 "nvme_iov_md": false 00:09:10.619 }, 00:09:10.619 "memory_domains": [ 00:09:10.619 { 00:09:10.619 "dma_device_id": "system", 00:09:10.619 "dma_device_type": 1 00:09:10.619 }, 00:09:10.619 { 00:09:10.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.619 "dma_device_type": 2 00:09:10.619 } 00:09:10.619 ], 00:09:10.619 "driver_specific": {} 00:09:10.619 } 00:09:10.619 ] 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.619 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.878 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.878 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.878 "name": "Existed_Raid", 00:09:10.878 "uuid": "199aa9e9-983f-468e-950c-4085b5bccbac", 00:09:10.878 "strip_size_kb": 64, 00:09:10.878 "state": "online", 00:09:10.878 "raid_level": "concat", 00:09:10.878 "superblock": true, 00:09:10.878 "num_base_bdevs": 3, 00:09:10.878 "num_base_bdevs_discovered": 3, 00:09:10.878 "num_base_bdevs_operational": 3, 00:09:10.878 "base_bdevs_list": [ 00:09:10.878 { 00:09:10.878 "name": "BaseBdev1", 00:09:10.878 "uuid": "833cbddc-e83e-4232-8941-f8405159bd89", 00:09:10.878 "is_configured": true, 00:09:10.878 "data_offset": 2048, 00:09:10.878 "data_size": 63488 00:09:10.878 }, 00:09:10.878 { 00:09:10.878 "name": "BaseBdev2", 00:09:10.878 "uuid": "2b223522-63ab-4f4f-9a23-0bdc49023d70", 00:09:10.878 "is_configured": true, 00:09:10.878 "data_offset": 2048, 00:09:10.878 "data_size": 63488 00:09:10.878 }, 00:09:10.878 { 00:09:10.878 "name": "BaseBdev3", 00:09:10.878 "uuid": "55c486f1-000f-45e1-a239-997f69987243", 00:09:10.878 "is_configured": true, 00:09:10.878 "data_offset": 2048, 00:09:10.878 "data_size": 63488 00:09:10.878 } 00:09:10.878 ] 00:09:10.878 }' 00:09:10.878 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.878 21:40:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.138 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.138 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.138 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.138 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.138 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.138 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.138 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.138 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.138 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.138 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.138 [2024-09-29 21:40:29.999600] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.138 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.138 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.138 "name": "Existed_Raid", 00:09:11.138 "aliases": [ 00:09:11.138 "199aa9e9-983f-468e-950c-4085b5bccbac" 00:09:11.138 ], 00:09:11.138 "product_name": "Raid Volume", 00:09:11.138 "block_size": 512, 00:09:11.138 "num_blocks": 190464, 00:09:11.138 "uuid": "199aa9e9-983f-468e-950c-4085b5bccbac", 00:09:11.138 "assigned_rate_limits": { 00:09:11.138 "rw_ios_per_sec": 0, 00:09:11.138 "rw_mbytes_per_sec": 0, 00:09:11.138 
"r_mbytes_per_sec": 0, 00:09:11.138 "w_mbytes_per_sec": 0 00:09:11.138 }, 00:09:11.138 "claimed": false, 00:09:11.138 "zoned": false, 00:09:11.138 "supported_io_types": { 00:09:11.138 "read": true, 00:09:11.138 "write": true, 00:09:11.138 "unmap": true, 00:09:11.138 "flush": true, 00:09:11.138 "reset": true, 00:09:11.138 "nvme_admin": false, 00:09:11.138 "nvme_io": false, 00:09:11.138 "nvme_io_md": false, 00:09:11.138 "write_zeroes": true, 00:09:11.138 "zcopy": false, 00:09:11.138 "get_zone_info": false, 00:09:11.138 "zone_management": false, 00:09:11.138 "zone_append": false, 00:09:11.138 "compare": false, 00:09:11.138 "compare_and_write": false, 00:09:11.138 "abort": false, 00:09:11.138 "seek_hole": false, 00:09:11.138 "seek_data": false, 00:09:11.138 "copy": false, 00:09:11.138 "nvme_iov_md": false 00:09:11.138 }, 00:09:11.138 "memory_domains": [ 00:09:11.138 { 00:09:11.138 "dma_device_id": "system", 00:09:11.138 "dma_device_type": 1 00:09:11.138 }, 00:09:11.138 { 00:09:11.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.138 "dma_device_type": 2 00:09:11.138 }, 00:09:11.138 { 00:09:11.138 "dma_device_id": "system", 00:09:11.138 "dma_device_type": 1 00:09:11.138 }, 00:09:11.138 { 00:09:11.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.138 "dma_device_type": 2 00:09:11.138 }, 00:09:11.138 { 00:09:11.138 "dma_device_id": "system", 00:09:11.138 "dma_device_type": 1 00:09:11.138 }, 00:09:11.138 { 00:09:11.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.138 "dma_device_type": 2 00:09:11.138 } 00:09:11.138 ], 00:09:11.138 "driver_specific": { 00:09:11.138 "raid": { 00:09:11.138 "uuid": "199aa9e9-983f-468e-950c-4085b5bccbac", 00:09:11.138 "strip_size_kb": 64, 00:09:11.138 "state": "online", 00:09:11.138 "raid_level": "concat", 00:09:11.138 "superblock": true, 00:09:11.138 "num_base_bdevs": 3, 00:09:11.138 "num_base_bdevs_discovered": 3, 00:09:11.138 "num_base_bdevs_operational": 3, 00:09:11.138 "base_bdevs_list": [ 00:09:11.138 { 00:09:11.138 
"name": "BaseBdev1", 00:09:11.138 "uuid": "833cbddc-e83e-4232-8941-f8405159bd89", 00:09:11.138 "is_configured": true, 00:09:11.138 "data_offset": 2048, 00:09:11.138 "data_size": 63488 00:09:11.138 }, 00:09:11.138 { 00:09:11.138 "name": "BaseBdev2", 00:09:11.138 "uuid": "2b223522-63ab-4f4f-9a23-0bdc49023d70", 00:09:11.138 "is_configured": true, 00:09:11.138 "data_offset": 2048, 00:09:11.138 "data_size": 63488 00:09:11.138 }, 00:09:11.138 { 00:09:11.138 "name": "BaseBdev3", 00:09:11.138 "uuid": "55c486f1-000f-45e1-a239-997f69987243", 00:09:11.138 "is_configured": true, 00:09:11.138 "data_offset": 2048, 00:09:11.138 "data_size": 63488 00:09:11.138 } 00:09:11.138 ] 00:09:11.138 } 00:09:11.138 } 00:09:11.138 }' 00:09:11.138 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:11.138 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:11.138 BaseBdev2 00:09:11.138 BaseBdev3' 00:09:11.138 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.398 21:40:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.398 [2024-09-29 21:40:30.238951] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:11.398 [2024-09-29 21:40:30.239024] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.398 [2024-09-29 21:40:30.239106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.398 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.658 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.658 "name": "Existed_Raid", 00:09:11.658 "uuid": "199aa9e9-983f-468e-950c-4085b5bccbac", 00:09:11.658 "strip_size_kb": 64, 00:09:11.658 "state": "offline", 00:09:11.658 "raid_level": "concat", 00:09:11.658 "superblock": true, 00:09:11.658 "num_base_bdevs": 3, 00:09:11.658 "num_base_bdevs_discovered": 2, 00:09:11.658 "num_base_bdevs_operational": 2, 00:09:11.658 "base_bdevs_list": [ 00:09:11.658 { 00:09:11.658 "name": null, 00:09:11.658 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:11.658 "is_configured": false, 00:09:11.658 "data_offset": 0, 00:09:11.658 "data_size": 63488 00:09:11.658 }, 00:09:11.658 { 00:09:11.658 "name": "BaseBdev2", 00:09:11.658 "uuid": "2b223522-63ab-4f4f-9a23-0bdc49023d70", 00:09:11.658 "is_configured": true, 00:09:11.658 "data_offset": 2048, 00:09:11.658 "data_size": 63488 00:09:11.658 }, 00:09:11.658 { 00:09:11.658 "name": "BaseBdev3", 00:09:11.658 "uuid": "55c486f1-000f-45e1-a239-997f69987243", 00:09:11.658 "is_configured": true, 00:09:11.658 "data_offset": 2048, 00:09:11.658 "data_size": 63488 00:09:11.658 } 00:09:11.658 ] 00:09:11.658 }' 00:09:11.658 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.658 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.918 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:11.918 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:11.918 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.918 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:11.918 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.918 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.918 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.918 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:11.918 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:11.918 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:11.918 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.918 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.918 [2024-09-29 21:40:30.846234] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.177 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.177 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.177 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.177 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.177 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.177 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.177 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.177 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.177 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.177 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.177 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:12.177 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.177 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.177 [2024-09-29 21:40:31.002167] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.177 [2024-09-29 21:40:31.002292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:12.177 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.177 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.178 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.437 BaseBdev2 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.437 
21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.437 [ 00:09:12.437 { 00:09:12.437 "name": "BaseBdev2", 00:09:12.437 "aliases": [ 00:09:12.437 "1ed6ffd7-ccb4-40cf-b2ad-50dcb497d852" 00:09:12.437 ], 00:09:12.437 "product_name": "Malloc disk", 00:09:12.437 "block_size": 512, 00:09:12.437 "num_blocks": 65536, 00:09:12.437 "uuid": "1ed6ffd7-ccb4-40cf-b2ad-50dcb497d852", 00:09:12.437 "assigned_rate_limits": { 00:09:12.437 "rw_ios_per_sec": 0, 00:09:12.437 "rw_mbytes_per_sec": 0, 00:09:12.437 "r_mbytes_per_sec": 0, 00:09:12.437 "w_mbytes_per_sec": 0 
00:09:12.437 }, 00:09:12.437 "claimed": false, 00:09:12.437 "zoned": false, 00:09:12.437 "supported_io_types": { 00:09:12.437 "read": true, 00:09:12.437 "write": true, 00:09:12.437 "unmap": true, 00:09:12.437 "flush": true, 00:09:12.437 "reset": true, 00:09:12.437 "nvme_admin": false, 00:09:12.437 "nvme_io": false, 00:09:12.437 "nvme_io_md": false, 00:09:12.437 "write_zeroes": true, 00:09:12.437 "zcopy": true, 00:09:12.437 "get_zone_info": false, 00:09:12.437 "zone_management": false, 00:09:12.437 "zone_append": false, 00:09:12.437 "compare": false, 00:09:12.437 "compare_and_write": false, 00:09:12.437 "abort": true, 00:09:12.437 "seek_hole": false, 00:09:12.437 "seek_data": false, 00:09:12.437 "copy": true, 00:09:12.437 "nvme_iov_md": false 00:09:12.437 }, 00:09:12.437 "memory_domains": [ 00:09:12.437 { 00:09:12.437 "dma_device_id": "system", 00:09:12.437 "dma_device_type": 1 00:09:12.437 }, 00:09:12.437 { 00:09:12.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.437 "dma_device_type": 2 00:09:12.437 } 00:09:12.437 ], 00:09:12.437 "driver_specific": {} 00:09:12.437 } 00:09:12.437 ] 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.437 BaseBdev3 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.437 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.437 [ 00:09:12.437 { 00:09:12.437 "name": "BaseBdev3", 00:09:12.437 "aliases": [ 00:09:12.437 "aefa45c0-161e-4614-83da-ba78b8519651" 00:09:12.437 ], 00:09:12.437 "product_name": "Malloc disk", 00:09:12.437 "block_size": 512, 00:09:12.437 "num_blocks": 65536, 00:09:12.437 "uuid": "aefa45c0-161e-4614-83da-ba78b8519651", 00:09:12.437 "assigned_rate_limits": { 00:09:12.437 "rw_ios_per_sec": 0, 00:09:12.437 "rw_mbytes_per_sec": 0, 
00:09:12.437 "r_mbytes_per_sec": 0, 00:09:12.437 "w_mbytes_per_sec": 0 00:09:12.437 }, 00:09:12.438 "claimed": false, 00:09:12.438 "zoned": false, 00:09:12.438 "supported_io_types": { 00:09:12.438 "read": true, 00:09:12.438 "write": true, 00:09:12.438 "unmap": true, 00:09:12.438 "flush": true, 00:09:12.438 "reset": true, 00:09:12.438 "nvme_admin": false, 00:09:12.438 "nvme_io": false, 00:09:12.438 "nvme_io_md": false, 00:09:12.438 "write_zeroes": true, 00:09:12.438 "zcopy": true, 00:09:12.438 "get_zone_info": false, 00:09:12.438 "zone_management": false, 00:09:12.438 "zone_append": false, 00:09:12.438 "compare": false, 00:09:12.438 "compare_and_write": false, 00:09:12.438 "abort": true, 00:09:12.438 "seek_hole": false, 00:09:12.438 "seek_data": false, 00:09:12.438 "copy": true, 00:09:12.438 "nvme_iov_md": false 00:09:12.438 }, 00:09:12.438 "memory_domains": [ 00:09:12.438 { 00:09:12.438 "dma_device_id": "system", 00:09:12.438 "dma_device_type": 1 00:09:12.438 }, 00:09:12.438 { 00:09:12.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.438 "dma_device_type": 2 00:09:12.438 } 00:09:12.438 ], 00:09:12.438 "driver_specific": {} 00:09:12.438 } 00:09:12.438 ] 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:12.438 [2024-09-29 21:40:31.329790] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.438 [2024-09-29 21:40:31.329905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.438 [2024-09-29 21:40:31.329946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.438 [2024-09-29 21:40:31.331993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.438 21:40:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.438 "name": "Existed_Raid", 00:09:12.438 "uuid": "7b83f714-a6c1-48ce-8ad3-2410b9b0a29a", 00:09:12.438 "strip_size_kb": 64, 00:09:12.438 "state": "configuring", 00:09:12.438 "raid_level": "concat", 00:09:12.438 "superblock": true, 00:09:12.438 "num_base_bdevs": 3, 00:09:12.438 "num_base_bdevs_discovered": 2, 00:09:12.438 "num_base_bdevs_operational": 3, 00:09:12.438 "base_bdevs_list": [ 00:09:12.438 { 00:09:12.438 "name": "BaseBdev1", 00:09:12.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.438 "is_configured": false, 00:09:12.438 "data_offset": 0, 00:09:12.438 "data_size": 0 00:09:12.438 }, 00:09:12.438 { 00:09:12.438 "name": "BaseBdev2", 00:09:12.438 "uuid": "1ed6ffd7-ccb4-40cf-b2ad-50dcb497d852", 00:09:12.438 "is_configured": true, 00:09:12.438 "data_offset": 2048, 00:09:12.438 "data_size": 63488 00:09:12.438 }, 00:09:12.438 { 00:09:12.438 "name": "BaseBdev3", 00:09:12.438 "uuid": "aefa45c0-161e-4614-83da-ba78b8519651", 00:09:12.438 "is_configured": true, 00:09:12.438 "data_offset": 2048, 00:09:12.438 "data_size": 63488 00:09:12.438 } 00:09:12.438 ] 00:09:12.438 }' 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.438 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.005 [2024-09-29 21:40:31.764993] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.005 "name": "Existed_Raid", 00:09:13.005 "uuid": "7b83f714-a6c1-48ce-8ad3-2410b9b0a29a", 00:09:13.005 "strip_size_kb": 64, 00:09:13.005 "state": "configuring", 00:09:13.005 "raid_level": "concat", 00:09:13.005 "superblock": true, 00:09:13.005 "num_base_bdevs": 3, 00:09:13.005 "num_base_bdevs_discovered": 1, 00:09:13.005 "num_base_bdevs_operational": 3, 00:09:13.005 "base_bdevs_list": [ 00:09:13.005 { 00:09:13.005 "name": "BaseBdev1", 00:09:13.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.005 "is_configured": false, 00:09:13.005 "data_offset": 0, 00:09:13.005 "data_size": 0 00:09:13.005 }, 00:09:13.005 { 00:09:13.005 "name": null, 00:09:13.005 "uuid": "1ed6ffd7-ccb4-40cf-b2ad-50dcb497d852", 00:09:13.005 "is_configured": false, 00:09:13.005 "data_offset": 0, 00:09:13.005 "data_size": 63488 00:09:13.005 }, 00:09:13.005 { 00:09:13.005 "name": "BaseBdev3", 00:09:13.005 "uuid": "aefa45c0-161e-4614-83da-ba78b8519651", 00:09:13.005 "is_configured": true, 00:09:13.005 "data_offset": 2048, 00:09:13.005 "data_size": 63488 00:09:13.005 } 00:09:13.005 ] 00:09:13.005 }' 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.005 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.265 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:13.265 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.265 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:13.265 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.265 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.265 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:13.265 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.265 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.265 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.524 [2024-09-29 21:40:32.254674] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.524 BaseBdev1 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.524 21:40:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.524 [ 00:09:13.524 { 00:09:13.524 "name": "BaseBdev1", 00:09:13.524 "aliases": [ 00:09:13.524 "b31bc3d9-6df5-4c5a-8b21-1af3097d5b2d" 00:09:13.524 ], 00:09:13.524 "product_name": "Malloc disk", 00:09:13.524 "block_size": 512, 00:09:13.524 "num_blocks": 65536, 00:09:13.524 "uuid": "b31bc3d9-6df5-4c5a-8b21-1af3097d5b2d", 00:09:13.524 "assigned_rate_limits": { 00:09:13.524 "rw_ios_per_sec": 0, 00:09:13.524 "rw_mbytes_per_sec": 0, 00:09:13.524 "r_mbytes_per_sec": 0, 00:09:13.524 "w_mbytes_per_sec": 0 00:09:13.524 }, 00:09:13.524 "claimed": true, 00:09:13.524 "claim_type": "exclusive_write", 00:09:13.524 "zoned": false, 00:09:13.524 "supported_io_types": { 00:09:13.524 "read": true, 00:09:13.524 "write": true, 00:09:13.524 "unmap": true, 00:09:13.524 "flush": true, 00:09:13.524 "reset": true, 00:09:13.524 "nvme_admin": false, 00:09:13.524 "nvme_io": false, 00:09:13.524 "nvme_io_md": false, 00:09:13.524 "write_zeroes": true, 00:09:13.524 "zcopy": true, 00:09:13.524 "get_zone_info": false, 00:09:13.524 "zone_management": false, 00:09:13.524 "zone_append": false, 00:09:13.524 "compare": false, 00:09:13.524 "compare_and_write": false, 00:09:13.524 "abort": true, 00:09:13.524 "seek_hole": false, 00:09:13.524 "seek_data": false, 00:09:13.524 "copy": true, 00:09:13.524 "nvme_iov_md": false 00:09:13.524 }, 00:09:13.524 "memory_domains": [ 00:09:13.524 { 00:09:13.524 "dma_device_id": "system", 00:09:13.524 "dma_device_type": 1 00:09:13.524 }, 00:09:13.524 { 00:09:13.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.524 
"dma_device_type": 2 00:09:13.524 } 00:09:13.524 ], 00:09:13.524 "driver_specific": {} 00:09:13.524 } 00:09:13.524 ] 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.524 "name": "Existed_Raid", 00:09:13.524 "uuid": "7b83f714-a6c1-48ce-8ad3-2410b9b0a29a", 00:09:13.524 "strip_size_kb": 64, 00:09:13.524 "state": "configuring", 00:09:13.524 "raid_level": "concat", 00:09:13.524 "superblock": true, 00:09:13.524 "num_base_bdevs": 3, 00:09:13.524 "num_base_bdevs_discovered": 2, 00:09:13.524 "num_base_bdevs_operational": 3, 00:09:13.524 "base_bdevs_list": [ 00:09:13.524 { 00:09:13.524 "name": "BaseBdev1", 00:09:13.524 "uuid": "b31bc3d9-6df5-4c5a-8b21-1af3097d5b2d", 00:09:13.524 "is_configured": true, 00:09:13.524 "data_offset": 2048, 00:09:13.524 "data_size": 63488 00:09:13.524 }, 00:09:13.524 { 00:09:13.524 "name": null, 00:09:13.524 "uuid": "1ed6ffd7-ccb4-40cf-b2ad-50dcb497d852", 00:09:13.524 "is_configured": false, 00:09:13.524 "data_offset": 0, 00:09:13.524 "data_size": 63488 00:09:13.524 }, 00:09:13.524 { 00:09:13.524 "name": "BaseBdev3", 00:09:13.524 "uuid": "aefa45c0-161e-4614-83da-ba78b8519651", 00:09:13.524 "is_configured": true, 00:09:13.524 "data_offset": 2048, 00:09:13.524 "data_size": 63488 00:09:13.524 } 00:09:13.524 ] 00:09:13.524 }' 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.524 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.784 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:13.784 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.784 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.784 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.043 [2024-09-29 21:40:32.781818] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.043 
21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.043 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.043 "name": "Existed_Raid", 00:09:14.043 "uuid": "7b83f714-a6c1-48ce-8ad3-2410b9b0a29a", 00:09:14.043 "strip_size_kb": 64, 00:09:14.043 "state": "configuring", 00:09:14.043 "raid_level": "concat", 00:09:14.043 "superblock": true, 00:09:14.043 "num_base_bdevs": 3, 00:09:14.043 "num_base_bdevs_discovered": 1, 00:09:14.043 "num_base_bdevs_operational": 3, 00:09:14.043 "base_bdevs_list": [ 00:09:14.043 { 00:09:14.043 "name": "BaseBdev1", 00:09:14.043 "uuid": "b31bc3d9-6df5-4c5a-8b21-1af3097d5b2d", 00:09:14.043 "is_configured": true, 00:09:14.043 "data_offset": 2048, 00:09:14.043 "data_size": 63488 00:09:14.043 }, 00:09:14.043 { 00:09:14.043 "name": null, 00:09:14.043 "uuid": "1ed6ffd7-ccb4-40cf-b2ad-50dcb497d852", 00:09:14.043 "is_configured": false, 00:09:14.043 "data_offset": 0, 00:09:14.043 "data_size": 63488 00:09:14.043 }, 00:09:14.043 { 00:09:14.043 "name": null, 00:09:14.043 "uuid": "aefa45c0-161e-4614-83da-ba78b8519651", 00:09:14.043 "is_configured": false, 00:09:14.043 "data_offset": 0, 00:09:14.043 "data_size": 63488 00:09:14.044 } 00:09:14.044 ] 00:09:14.044 }' 00:09:14.044 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.044 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.301 
21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.301 [2024-09-29 21:40:33.225096] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.301 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.301 "name": "Existed_Raid", 00:09:14.301 "uuid": "7b83f714-a6c1-48ce-8ad3-2410b9b0a29a", 00:09:14.301 "strip_size_kb": 64, 00:09:14.301 "state": "configuring", 00:09:14.301 "raid_level": "concat", 00:09:14.301 "superblock": true, 00:09:14.301 "num_base_bdevs": 3, 00:09:14.301 "num_base_bdevs_discovered": 2, 00:09:14.301 "num_base_bdevs_operational": 3, 00:09:14.301 "base_bdevs_list": [ 00:09:14.301 { 00:09:14.301 "name": "BaseBdev1", 00:09:14.301 "uuid": "b31bc3d9-6df5-4c5a-8b21-1af3097d5b2d", 00:09:14.301 "is_configured": true, 00:09:14.301 "data_offset": 2048, 00:09:14.301 "data_size": 63488 00:09:14.301 }, 00:09:14.301 { 00:09:14.301 "name": null, 00:09:14.301 "uuid": "1ed6ffd7-ccb4-40cf-b2ad-50dcb497d852", 00:09:14.301 "is_configured": false, 00:09:14.301 "data_offset": 0, 00:09:14.301 "data_size": 
63488 00:09:14.301 }, 00:09:14.301 { 00:09:14.301 "name": "BaseBdev3", 00:09:14.301 "uuid": "aefa45c0-161e-4614-83da-ba78b8519651", 00:09:14.301 "is_configured": true, 00:09:14.301 "data_offset": 2048, 00:09:14.301 "data_size": 63488 00:09:14.301 } 00:09:14.301 ] 00:09:14.301 }' 00:09:14.302 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.302 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.869 [2024-09-29 21:40:33.704328] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.869 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.128 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.128 "name": "Existed_Raid", 00:09:15.128 "uuid": "7b83f714-a6c1-48ce-8ad3-2410b9b0a29a", 00:09:15.128 "strip_size_kb": 64, 00:09:15.128 "state": "configuring", 00:09:15.128 "raid_level": "concat", 00:09:15.128 "superblock": true, 00:09:15.128 "num_base_bdevs": 3, 00:09:15.128 "num_base_bdevs_discovered": 1, 00:09:15.128 "num_base_bdevs_operational": 
3, 00:09:15.128 "base_bdevs_list": [ 00:09:15.128 { 00:09:15.128 "name": null, 00:09:15.128 "uuid": "b31bc3d9-6df5-4c5a-8b21-1af3097d5b2d", 00:09:15.128 "is_configured": false, 00:09:15.128 "data_offset": 0, 00:09:15.128 "data_size": 63488 00:09:15.128 }, 00:09:15.128 { 00:09:15.128 "name": null, 00:09:15.128 "uuid": "1ed6ffd7-ccb4-40cf-b2ad-50dcb497d852", 00:09:15.128 "is_configured": false, 00:09:15.128 "data_offset": 0, 00:09:15.128 "data_size": 63488 00:09:15.128 }, 00:09:15.128 { 00:09:15.128 "name": "BaseBdev3", 00:09:15.128 "uuid": "aefa45c0-161e-4614-83da-ba78b8519651", 00:09:15.128 "is_configured": true, 00:09:15.128 "data_offset": 2048, 00:09:15.128 "data_size": 63488 00:09:15.128 } 00:09:15.128 ] 00:09:15.128 }' 00:09:15.128 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.128 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.387 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.387 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.387 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.387 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:15.387 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.387 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.387 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:15.388 [2024-09-29 21:40:34.286759] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.388 "name": "Existed_Raid", 00:09:15.388 "uuid": "7b83f714-a6c1-48ce-8ad3-2410b9b0a29a", 00:09:15.388 "strip_size_kb": 64, 00:09:15.388 "state": "configuring", 00:09:15.388 "raid_level": "concat", 00:09:15.388 "superblock": true, 00:09:15.388 "num_base_bdevs": 3, 00:09:15.388 "num_base_bdevs_discovered": 2, 00:09:15.388 "num_base_bdevs_operational": 3, 00:09:15.388 "base_bdevs_list": [ 00:09:15.388 { 00:09:15.388 "name": null, 00:09:15.388 "uuid": "b31bc3d9-6df5-4c5a-8b21-1af3097d5b2d", 00:09:15.388 "is_configured": false, 00:09:15.388 "data_offset": 0, 00:09:15.388 "data_size": 63488 00:09:15.388 }, 00:09:15.388 { 00:09:15.388 "name": "BaseBdev2", 00:09:15.388 "uuid": "1ed6ffd7-ccb4-40cf-b2ad-50dcb497d852", 00:09:15.388 "is_configured": true, 00:09:15.388 "data_offset": 2048, 00:09:15.388 "data_size": 63488 00:09:15.388 }, 00:09:15.388 { 00:09:15.388 "name": "BaseBdev3", 00:09:15.388 "uuid": "aefa45c0-161e-4614-83da-ba78b8519651", 00:09:15.388 "is_configured": true, 00:09:15.388 "data_offset": 2048, 00:09:15.388 "data_size": 63488 00:09:15.388 } 00:09:15.388 ] 00:09:15.388 }' 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.388 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b31bc3d9-6df5-4c5a-8b21-1af3097d5b2d 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.957 [2024-09-29 21:40:34.826793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:15.957 [2024-09-29 21:40:34.827179] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:15.957 [2024-09-29 21:40:34.827238] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:15.957 [2024-09-29 21:40:34.827576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:15.957 [2024-09-29 21:40:34.827763] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:15.957 [2024-09-29 21:40:34.827801] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:15.957 NewBaseBdev 00:09:15.957 [2024-09-29 21:40:34.827988] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.957 [ 00:09:15.957 { 00:09:15.957 "name": "NewBaseBdev", 00:09:15.957 "aliases": [ 00:09:15.957 "b31bc3d9-6df5-4c5a-8b21-1af3097d5b2d" 00:09:15.957 ], 00:09:15.957 "product_name": "Malloc disk", 00:09:15.957 "block_size": 512, 00:09:15.957 "num_blocks": 65536, 00:09:15.957 "uuid": 
"b31bc3d9-6df5-4c5a-8b21-1af3097d5b2d", 00:09:15.957 "assigned_rate_limits": { 00:09:15.957 "rw_ios_per_sec": 0, 00:09:15.957 "rw_mbytes_per_sec": 0, 00:09:15.957 "r_mbytes_per_sec": 0, 00:09:15.957 "w_mbytes_per_sec": 0 00:09:15.957 }, 00:09:15.957 "claimed": true, 00:09:15.957 "claim_type": "exclusive_write", 00:09:15.957 "zoned": false, 00:09:15.957 "supported_io_types": { 00:09:15.957 "read": true, 00:09:15.957 "write": true, 00:09:15.957 "unmap": true, 00:09:15.957 "flush": true, 00:09:15.957 "reset": true, 00:09:15.957 "nvme_admin": false, 00:09:15.957 "nvme_io": false, 00:09:15.957 "nvme_io_md": false, 00:09:15.957 "write_zeroes": true, 00:09:15.957 "zcopy": true, 00:09:15.957 "get_zone_info": false, 00:09:15.957 "zone_management": false, 00:09:15.957 "zone_append": false, 00:09:15.957 "compare": false, 00:09:15.957 "compare_and_write": false, 00:09:15.957 "abort": true, 00:09:15.957 "seek_hole": false, 00:09:15.957 "seek_data": false, 00:09:15.957 "copy": true, 00:09:15.957 "nvme_iov_md": false 00:09:15.957 }, 00:09:15.957 "memory_domains": [ 00:09:15.957 { 00:09:15.957 "dma_device_id": "system", 00:09:15.957 "dma_device_type": 1 00:09:15.957 }, 00:09:15.957 { 00:09:15.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.957 "dma_device_type": 2 00:09:15.957 } 00:09:15.957 ], 00:09:15.957 "driver_specific": {} 00:09:15.957 } 00:09:15.957 ] 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.957 21:40:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.957 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.957 "name": "Existed_Raid", 00:09:15.957 "uuid": "7b83f714-a6c1-48ce-8ad3-2410b9b0a29a", 00:09:15.957 "strip_size_kb": 64, 00:09:15.957 "state": "online", 00:09:15.957 "raid_level": "concat", 00:09:15.957 "superblock": true, 00:09:15.957 "num_base_bdevs": 3, 00:09:15.957 "num_base_bdevs_discovered": 3, 00:09:15.957 "num_base_bdevs_operational": 3, 00:09:15.957 "base_bdevs_list": [ 00:09:15.957 { 00:09:15.957 "name": "NewBaseBdev", 00:09:15.957 "uuid": "b31bc3d9-6df5-4c5a-8b21-1af3097d5b2d", 00:09:15.957 "is_configured": 
true, 00:09:15.957 "data_offset": 2048, 00:09:15.957 "data_size": 63488 00:09:15.957 }, 00:09:15.957 { 00:09:15.958 "name": "BaseBdev2", 00:09:15.958 "uuid": "1ed6ffd7-ccb4-40cf-b2ad-50dcb497d852", 00:09:15.958 "is_configured": true, 00:09:15.958 "data_offset": 2048, 00:09:15.958 "data_size": 63488 00:09:15.958 }, 00:09:15.958 { 00:09:15.958 "name": "BaseBdev3", 00:09:15.958 "uuid": "aefa45c0-161e-4614-83da-ba78b8519651", 00:09:15.958 "is_configured": true, 00:09:15.958 "data_offset": 2048, 00:09:15.958 "data_size": 63488 00:09:15.958 } 00:09:15.958 ] 00:09:15.958 }' 00:09:15.958 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.958 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.527 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.527 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.527 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.527 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.527 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.527 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.527 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.527 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.527 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.527 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.527 [2024-09-29 21:40:35.342227] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.527 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.527 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.527 "name": "Existed_Raid", 00:09:16.527 "aliases": [ 00:09:16.527 "7b83f714-a6c1-48ce-8ad3-2410b9b0a29a" 00:09:16.527 ], 00:09:16.527 "product_name": "Raid Volume", 00:09:16.527 "block_size": 512, 00:09:16.527 "num_blocks": 190464, 00:09:16.527 "uuid": "7b83f714-a6c1-48ce-8ad3-2410b9b0a29a", 00:09:16.527 "assigned_rate_limits": { 00:09:16.527 "rw_ios_per_sec": 0, 00:09:16.527 "rw_mbytes_per_sec": 0, 00:09:16.527 "r_mbytes_per_sec": 0, 00:09:16.527 "w_mbytes_per_sec": 0 00:09:16.527 }, 00:09:16.527 "claimed": false, 00:09:16.527 "zoned": false, 00:09:16.527 "supported_io_types": { 00:09:16.527 "read": true, 00:09:16.527 "write": true, 00:09:16.527 "unmap": true, 00:09:16.527 "flush": true, 00:09:16.527 "reset": true, 00:09:16.527 "nvme_admin": false, 00:09:16.527 "nvme_io": false, 00:09:16.527 "nvme_io_md": false, 00:09:16.527 "write_zeroes": true, 00:09:16.527 "zcopy": false, 00:09:16.527 "get_zone_info": false, 00:09:16.527 "zone_management": false, 00:09:16.527 "zone_append": false, 00:09:16.527 "compare": false, 00:09:16.527 "compare_and_write": false, 00:09:16.527 "abort": false, 00:09:16.527 "seek_hole": false, 00:09:16.527 "seek_data": false, 00:09:16.527 "copy": false, 00:09:16.527 "nvme_iov_md": false 00:09:16.527 }, 00:09:16.527 "memory_domains": [ 00:09:16.527 { 00:09:16.527 "dma_device_id": "system", 00:09:16.527 "dma_device_type": 1 00:09:16.527 }, 00:09:16.527 { 00:09:16.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.527 "dma_device_type": 2 00:09:16.527 }, 00:09:16.527 { 00:09:16.527 "dma_device_id": "system", 00:09:16.527 "dma_device_type": 1 00:09:16.527 }, 00:09:16.527 { 00:09:16.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.527 
"dma_device_type": 2 00:09:16.527 }, 00:09:16.527 { 00:09:16.527 "dma_device_id": "system", 00:09:16.527 "dma_device_type": 1 00:09:16.527 }, 00:09:16.527 { 00:09:16.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.527 "dma_device_type": 2 00:09:16.527 } 00:09:16.527 ], 00:09:16.527 "driver_specific": { 00:09:16.527 "raid": { 00:09:16.527 "uuid": "7b83f714-a6c1-48ce-8ad3-2410b9b0a29a", 00:09:16.527 "strip_size_kb": 64, 00:09:16.528 "state": "online", 00:09:16.528 "raid_level": "concat", 00:09:16.528 "superblock": true, 00:09:16.528 "num_base_bdevs": 3, 00:09:16.528 "num_base_bdevs_discovered": 3, 00:09:16.528 "num_base_bdevs_operational": 3, 00:09:16.528 "base_bdevs_list": [ 00:09:16.528 { 00:09:16.528 "name": "NewBaseBdev", 00:09:16.528 "uuid": "b31bc3d9-6df5-4c5a-8b21-1af3097d5b2d", 00:09:16.528 "is_configured": true, 00:09:16.528 "data_offset": 2048, 00:09:16.528 "data_size": 63488 00:09:16.528 }, 00:09:16.528 { 00:09:16.528 "name": "BaseBdev2", 00:09:16.528 "uuid": "1ed6ffd7-ccb4-40cf-b2ad-50dcb497d852", 00:09:16.528 "is_configured": true, 00:09:16.528 "data_offset": 2048, 00:09:16.528 "data_size": 63488 00:09:16.528 }, 00:09:16.528 { 00:09:16.528 "name": "BaseBdev3", 00:09:16.528 "uuid": "aefa45c0-161e-4614-83da-ba78b8519651", 00:09:16.528 "is_configured": true, 00:09:16.528 "data_offset": 2048, 00:09:16.528 "data_size": 63488 00:09:16.528 } 00:09:16.528 ] 00:09:16.528 } 00:09:16.528 } 00:09:16.528 }' 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:16.528 BaseBdev2 00:09:16.528 BaseBdev3' 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.528 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.788 
21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.788 [2024-09-29 21:40:35.601468] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:16.788 [2024-09-29 21:40:35.601536] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.788 [2024-09-29 21:40:35.601651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.788 [2024-09-29 21:40:35.601737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.788 [2024-09-29 21:40:35.601780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:16.788 21:40:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66295 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66295 ']' 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 66295 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66295 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66295' 00:09:16.788 killing process with pid 66295 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66295 00:09:16.788 [2024-09-29 21:40:35.649221] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.788 21:40:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66295 00:09:17.054 [2024-09-29 21:40:35.961737] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.438 21:40:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:18.438 00:09:18.438 real 0m10.681s 00:09:18.438 user 0m16.624s 00:09:18.438 sys 0m2.033s 00:09:18.438 21:40:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.438 21:40:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.438 ************************************ 00:09:18.438 END TEST raid_state_function_test_sb 00:09:18.438 ************************************ 00:09:18.439 21:40:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:18.439 21:40:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:18.439 21:40:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.439 21:40:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.439 ************************************ 00:09:18.439 START TEST raid_superblock_test 00:09:18.439 ************************************ 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66923 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66923 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 66923 ']' 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.439 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.699 [2024-09-29 21:40:37.454801] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:18.699 [2024-09-29 21:40:37.455001] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66923 ] 00:09:18.699 [2024-09-29 21:40:37.620332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.958 [2024-09-29 21:40:37.860960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.218 [2024-09-29 21:40:38.092598] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.218 [2024-09-29 21:40:38.092740] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:19.478 
21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.478 malloc1 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.478 [2024-09-29 21:40:38.333535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:19.478 [2024-09-29 21:40:38.333640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.478 [2024-09-29 21:40:38.333682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:19.478 [2024-09-29 21:40:38.333732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.478 [2024-09-29 21:40:38.336024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.478 [2024-09-29 21:40:38.336118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:19.478 pt1 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.478 malloc2 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.478 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.479 [2024-09-29 21:40:38.422515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.479 [2024-09-29 21:40:38.422610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.479 [2024-09-29 21:40:38.422650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:19.479 [2024-09-29 21:40:38.422677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.479 [2024-09-29 21:40:38.425014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.479 [2024-09-29 21:40:38.425099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:19.479 
pt2 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.479 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.738 malloc3 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.738 [2024-09-29 21:40:38.486170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:19.738 [2024-09-29 21:40:38.486258] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.738 [2024-09-29 21:40:38.486299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:19.738 [2024-09-29 21:40:38.486327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.738 [2024-09-29 21:40:38.488713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.738 [2024-09-29 21:40:38.488800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:19.738 pt3 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.738 [2024-09-29 21:40:38.498240] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:19.738 [2024-09-29 21:40:38.500277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.738 [2024-09-29 21:40:38.500341] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:19.738 [2024-09-29 21:40:38.500499] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:19.738 [2024-09-29 21:40:38.500512] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:19.738 [2024-09-29 21:40:38.500742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:19.738 [2024-09-29 21:40:38.500897] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:19.738 [2024-09-29 21:40:38.500907] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:19.738 [2024-09-29 21:40:38.501070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.738 21:40:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.738 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.738 "name": "raid_bdev1", 00:09:19.738 "uuid": "104b5b06-ca4c-499e-95ff-5ef6967572bb", 00:09:19.738 "strip_size_kb": 64, 00:09:19.738 "state": "online", 00:09:19.738 "raid_level": "concat", 00:09:19.738 "superblock": true, 00:09:19.738 "num_base_bdevs": 3, 00:09:19.738 "num_base_bdevs_discovered": 3, 00:09:19.738 "num_base_bdevs_operational": 3, 00:09:19.738 "base_bdevs_list": [ 00:09:19.738 { 00:09:19.738 "name": "pt1", 00:09:19.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.738 "is_configured": true, 00:09:19.738 "data_offset": 2048, 00:09:19.738 "data_size": 63488 00:09:19.738 }, 00:09:19.738 { 00:09:19.738 "name": "pt2", 00:09:19.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.738 "is_configured": true, 00:09:19.738 "data_offset": 2048, 00:09:19.738 "data_size": 63488 00:09:19.738 }, 00:09:19.739 { 00:09:19.739 "name": "pt3", 00:09:19.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:19.739 "is_configured": true, 00:09:19.739 "data_offset": 2048, 00:09:19.739 "data_size": 63488 00:09:19.739 } 00:09:19.739 ] 00:09:19.739 }' 00:09:19.739 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.739 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.997 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:19.997 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:19.997 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:19.997 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:19.997 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:19.997 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:19.997 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:19.997 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.997 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.997 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:19.997 [2024-09-29 21:40:38.953697] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.997 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.264 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.264 "name": "raid_bdev1", 00:09:20.264 "aliases": [ 00:09:20.264 "104b5b06-ca4c-499e-95ff-5ef6967572bb" 00:09:20.264 ], 00:09:20.264 "product_name": "Raid Volume", 00:09:20.264 "block_size": 512, 00:09:20.264 "num_blocks": 190464, 00:09:20.264 "uuid": "104b5b06-ca4c-499e-95ff-5ef6967572bb", 00:09:20.264 "assigned_rate_limits": { 00:09:20.264 "rw_ios_per_sec": 0, 00:09:20.264 "rw_mbytes_per_sec": 0, 00:09:20.264 "r_mbytes_per_sec": 0, 00:09:20.264 "w_mbytes_per_sec": 0 00:09:20.264 }, 00:09:20.264 "claimed": false, 00:09:20.264 "zoned": false, 00:09:20.264 "supported_io_types": { 00:09:20.264 "read": true, 00:09:20.264 "write": true, 00:09:20.264 "unmap": true, 00:09:20.264 "flush": true, 00:09:20.264 "reset": true, 00:09:20.264 "nvme_admin": false, 00:09:20.264 "nvme_io": false, 00:09:20.264 "nvme_io_md": false, 00:09:20.264 "write_zeroes": true, 00:09:20.264 "zcopy": false, 00:09:20.264 "get_zone_info": false, 00:09:20.264 "zone_management": false, 00:09:20.264 "zone_append": false, 00:09:20.264 "compare": 
false, 00:09:20.264 "compare_and_write": false, 00:09:20.264 "abort": false, 00:09:20.264 "seek_hole": false, 00:09:20.264 "seek_data": false, 00:09:20.264 "copy": false, 00:09:20.264 "nvme_iov_md": false 00:09:20.264 }, 00:09:20.264 "memory_domains": [ 00:09:20.264 { 00:09:20.264 "dma_device_id": "system", 00:09:20.264 "dma_device_type": 1 00:09:20.264 }, 00:09:20.264 { 00:09:20.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.264 "dma_device_type": 2 00:09:20.264 }, 00:09:20.264 { 00:09:20.264 "dma_device_id": "system", 00:09:20.264 "dma_device_type": 1 00:09:20.264 }, 00:09:20.264 { 00:09:20.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.264 "dma_device_type": 2 00:09:20.264 }, 00:09:20.264 { 00:09:20.264 "dma_device_id": "system", 00:09:20.264 "dma_device_type": 1 00:09:20.264 }, 00:09:20.264 { 00:09:20.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.264 "dma_device_type": 2 00:09:20.264 } 00:09:20.264 ], 00:09:20.264 "driver_specific": { 00:09:20.264 "raid": { 00:09:20.264 "uuid": "104b5b06-ca4c-499e-95ff-5ef6967572bb", 00:09:20.264 "strip_size_kb": 64, 00:09:20.264 "state": "online", 00:09:20.264 "raid_level": "concat", 00:09:20.264 "superblock": true, 00:09:20.264 "num_base_bdevs": 3, 00:09:20.264 "num_base_bdevs_discovered": 3, 00:09:20.264 "num_base_bdevs_operational": 3, 00:09:20.264 "base_bdevs_list": [ 00:09:20.264 { 00:09:20.264 "name": "pt1", 00:09:20.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.264 "is_configured": true, 00:09:20.264 "data_offset": 2048, 00:09:20.264 "data_size": 63488 00:09:20.264 }, 00:09:20.264 { 00:09:20.264 "name": "pt2", 00:09:20.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.264 "is_configured": true, 00:09:20.264 "data_offset": 2048, 00:09:20.264 "data_size": 63488 00:09:20.264 }, 00:09:20.264 { 00:09:20.264 "name": "pt3", 00:09:20.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.264 "is_configured": true, 00:09:20.264 "data_offset": 2048, 00:09:20.264 
"data_size": 63488 00:09:20.264 } 00:09:20.264 ] 00:09:20.264 } 00:09:20.264 } 00:09:20.264 }' 00:09:20.264 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:20.264 pt2 00:09:20.264 pt3' 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.264 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.264 [2024-09-29 21:40:39.229177] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=104b5b06-ca4c-499e-95ff-5ef6967572bb 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 104b5b06-ca4c-499e-95ff-5ef6967572bb ']' 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.524 [2024-09-29 21:40:39.272830] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.524 [2024-09-29 21:40:39.272897] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.524 [2024-09-29 21:40:39.272983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.524 [2024-09-29 21:40:39.273063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.524 [2024-09-29 21:40:39.273120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 
00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.524 [2024-09-29 21:40:39.416628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:20.524 [2024-09-29 21:40:39.418744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:20.524 
[2024-09-29 21:40:39.418788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:20.524 [2024-09-29 21:40:39.418832] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:20.524 [2024-09-29 21:40:39.418877] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:20.524 [2024-09-29 21:40:39.418895] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:20.524 [2024-09-29 21:40:39.418911] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.524 [2024-09-29 21:40:39.418920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:20.524 request: 00:09:20.524 { 00:09:20.524 "name": "raid_bdev1", 00:09:20.524 "raid_level": "concat", 00:09:20.524 "base_bdevs": [ 00:09:20.524 "malloc1", 00:09:20.524 "malloc2", 00:09:20.524 "malloc3" 00:09:20.524 ], 00:09:20.524 "strip_size_kb": 64, 00:09:20.524 "superblock": false, 00:09:20.524 "method": "bdev_raid_create", 00:09:20.524 "req_id": 1 00:09:20.524 } 00:09:20.524 Got JSON-RPC error response 00:09:20.524 response: 00:09:20.524 { 00:09:20.524 "code": -17, 00:09:20.524 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:20.524 } 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:20.524 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:20.525 21:40:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.525 [2024-09-29 21:40:39.464498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:20.525 [2024-09-29 21:40:39.464597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.525 [2024-09-29 21:40:39.464632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:20.525 [2024-09-29 21:40:39.464659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.525 [2024-09-29 21:40:39.467042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.525 [2024-09-29 21:40:39.467121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:20.525 [2024-09-29 21:40:39.467206] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:20.525 [2024-09-29 21:40:39.467280] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:09:20.525 pt1 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.525 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.784 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.784 "name": "raid_bdev1", 00:09:20.785 "uuid": 
"104b5b06-ca4c-499e-95ff-5ef6967572bb", 00:09:20.785 "strip_size_kb": 64, 00:09:20.785 "state": "configuring", 00:09:20.785 "raid_level": "concat", 00:09:20.785 "superblock": true, 00:09:20.785 "num_base_bdevs": 3, 00:09:20.785 "num_base_bdevs_discovered": 1, 00:09:20.785 "num_base_bdevs_operational": 3, 00:09:20.785 "base_bdevs_list": [ 00:09:20.785 { 00:09:20.785 "name": "pt1", 00:09:20.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.785 "is_configured": true, 00:09:20.785 "data_offset": 2048, 00:09:20.785 "data_size": 63488 00:09:20.785 }, 00:09:20.785 { 00:09:20.785 "name": null, 00:09:20.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.785 "is_configured": false, 00:09:20.785 "data_offset": 2048, 00:09:20.785 "data_size": 63488 00:09:20.785 }, 00:09:20.785 { 00:09:20.785 "name": null, 00:09:20.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.785 "is_configured": false, 00:09:20.785 "data_offset": 2048, 00:09:20.785 "data_size": 63488 00:09:20.785 } 00:09:20.785 ] 00:09:20.785 }' 00:09:20.785 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.785 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.044 [2024-09-29 21:40:39.923724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:21.044 [2024-09-29 21:40:39.923781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.044 [2024-09-29 21:40:39.923804] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:21.044 [2024-09-29 21:40:39.923813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.044 [2024-09-29 21:40:39.924260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.044 [2024-09-29 21:40:39.924278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:21.044 [2024-09-29 21:40:39.924352] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:21.044 [2024-09-29 21:40:39.924377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:21.044 pt2 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.044 [2024-09-29 21:40:39.935727] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.044 "name": "raid_bdev1", 00:09:21.044 "uuid": "104b5b06-ca4c-499e-95ff-5ef6967572bb", 00:09:21.044 "strip_size_kb": 64, 00:09:21.044 "state": "configuring", 00:09:21.044 "raid_level": "concat", 00:09:21.044 "superblock": true, 00:09:21.044 "num_base_bdevs": 3, 00:09:21.044 "num_base_bdevs_discovered": 1, 00:09:21.044 "num_base_bdevs_operational": 3, 00:09:21.044 "base_bdevs_list": [ 00:09:21.044 { 00:09:21.044 "name": "pt1", 00:09:21.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.044 "is_configured": true, 00:09:21.044 "data_offset": 2048, 00:09:21.044 "data_size": 63488 00:09:21.044 }, 00:09:21.044 { 00:09:21.044 "name": null, 00:09:21.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.044 "is_configured": false, 00:09:21.044 "data_offset": 0, 00:09:21.044 "data_size": 63488 00:09:21.044 }, 00:09:21.044 { 00:09:21.044 "name": null, 00:09:21.044 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:21.044 "is_configured": false, 00:09:21.044 "data_offset": 2048, 00:09:21.044 "data_size": 63488 00:09:21.044 } 00:09:21.044 ] 00:09:21.044 }' 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.044 21:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.612 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:21.612 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:21.612 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:21.612 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.612 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.612 [2024-09-29 21:40:40.382903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:21.612 [2024-09-29 21:40:40.383004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.612 [2024-09-29 21:40:40.383046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:21.612 [2024-09-29 21:40:40.383078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.612 [2024-09-29 21:40:40.383527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.613 [2024-09-29 21:40:40.383586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:21.613 [2024-09-29 21:40:40.383680] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:21.613 [2024-09-29 21:40:40.383746] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:21.613 pt2 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.613 [2024-09-29 21:40:40.394903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:21.613 [2024-09-29 21:40:40.394949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.613 [2024-09-29 21:40:40.394961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:21.613 [2024-09-29 21:40:40.394971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.613 [2024-09-29 21:40:40.395333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.613 [2024-09-29 21:40:40.395357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:21.613 [2024-09-29 21:40:40.395412] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:21.613 [2024-09-29 21:40:40.395430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:21.613 [2024-09-29 21:40:40.395541] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:21.613 [2024-09-29 21:40:40.395552] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:21.613 [2024-09-29 21:40:40.395807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:21.613 [2024-09-29 
21:40:40.395948] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:21.613 [2024-09-29 21:40:40.395964] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:21.613 [2024-09-29 21:40:40.396136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.613 pt3 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.613 "name": "raid_bdev1", 00:09:21.613 "uuid": "104b5b06-ca4c-499e-95ff-5ef6967572bb", 00:09:21.613 "strip_size_kb": 64, 00:09:21.613 "state": "online", 00:09:21.613 "raid_level": "concat", 00:09:21.613 "superblock": true, 00:09:21.613 "num_base_bdevs": 3, 00:09:21.613 "num_base_bdevs_discovered": 3, 00:09:21.613 "num_base_bdevs_operational": 3, 00:09:21.613 "base_bdevs_list": [ 00:09:21.613 { 00:09:21.613 "name": "pt1", 00:09:21.613 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.613 "is_configured": true, 00:09:21.613 "data_offset": 2048, 00:09:21.613 "data_size": 63488 00:09:21.613 }, 00:09:21.613 { 00:09:21.613 "name": "pt2", 00:09:21.613 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.613 "is_configured": true, 00:09:21.613 "data_offset": 2048, 00:09:21.613 "data_size": 63488 00:09:21.613 }, 00:09:21.613 { 00:09:21.613 "name": "pt3", 00:09:21.613 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.613 "is_configured": true, 00:09:21.613 "data_offset": 2048, 00:09:21.613 "data_size": 63488 00:09:21.613 } 00:09:21.613 ] 00:09:21.613 }' 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.613 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.873 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:21.873 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 
00:09:21.873 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.873 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.873 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.873 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.873 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.873 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.873 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.873 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.873 [2024-09-29 21:40:40.830432] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.873 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.133 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.133 "name": "raid_bdev1", 00:09:22.133 "aliases": [ 00:09:22.133 "104b5b06-ca4c-499e-95ff-5ef6967572bb" 00:09:22.133 ], 00:09:22.133 "product_name": "Raid Volume", 00:09:22.133 "block_size": 512, 00:09:22.133 "num_blocks": 190464, 00:09:22.133 "uuid": "104b5b06-ca4c-499e-95ff-5ef6967572bb", 00:09:22.133 "assigned_rate_limits": { 00:09:22.133 "rw_ios_per_sec": 0, 00:09:22.133 "rw_mbytes_per_sec": 0, 00:09:22.133 "r_mbytes_per_sec": 0, 00:09:22.133 "w_mbytes_per_sec": 0 00:09:22.133 }, 00:09:22.133 "claimed": false, 00:09:22.133 "zoned": false, 00:09:22.133 "supported_io_types": { 00:09:22.133 "read": true, 00:09:22.133 "write": true, 00:09:22.133 "unmap": true, 00:09:22.133 "flush": true, 00:09:22.133 "reset": true, 00:09:22.133 "nvme_admin": false, 00:09:22.133 "nvme_io": false, 00:09:22.133 "nvme_io_md": false, 
00:09:22.133 "write_zeroes": true, 00:09:22.133 "zcopy": false, 00:09:22.133 "get_zone_info": false, 00:09:22.133 "zone_management": false, 00:09:22.133 "zone_append": false, 00:09:22.133 "compare": false, 00:09:22.133 "compare_and_write": false, 00:09:22.133 "abort": false, 00:09:22.133 "seek_hole": false, 00:09:22.133 "seek_data": false, 00:09:22.133 "copy": false, 00:09:22.133 "nvme_iov_md": false 00:09:22.133 }, 00:09:22.133 "memory_domains": [ 00:09:22.133 { 00:09:22.133 "dma_device_id": "system", 00:09:22.133 "dma_device_type": 1 00:09:22.133 }, 00:09:22.133 { 00:09:22.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.133 "dma_device_type": 2 00:09:22.133 }, 00:09:22.133 { 00:09:22.133 "dma_device_id": "system", 00:09:22.133 "dma_device_type": 1 00:09:22.133 }, 00:09:22.133 { 00:09:22.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.133 "dma_device_type": 2 00:09:22.133 }, 00:09:22.133 { 00:09:22.133 "dma_device_id": "system", 00:09:22.133 "dma_device_type": 1 00:09:22.133 }, 00:09:22.133 { 00:09:22.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.133 "dma_device_type": 2 00:09:22.133 } 00:09:22.133 ], 00:09:22.133 "driver_specific": { 00:09:22.133 "raid": { 00:09:22.133 "uuid": "104b5b06-ca4c-499e-95ff-5ef6967572bb", 00:09:22.133 "strip_size_kb": 64, 00:09:22.133 "state": "online", 00:09:22.133 "raid_level": "concat", 00:09:22.133 "superblock": true, 00:09:22.133 "num_base_bdevs": 3, 00:09:22.133 "num_base_bdevs_discovered": 3, 00:09:22.133 "num_base_bdevs_operational": 3, 00:09:22.133 "base_bdevs_list": [ 00:09:22.133 { 00:09:22.133 "name": "pt1", 00:09:22.133 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.133 "is_configured": true, 00:09:22.133 "data_offset": 2048, 00:09:22.133 "data_size": 63488 00:09:22.133 }, 00:09:22.133 { 00:09:22.133 "name": "pt2", 00:09:22.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.133 "is_configured": true, 00:09:22.133 "data_offset": 2048, 00:09:22.133 "data_size": 63488 00:09:22.133 }, 
00:09:22.133 { 00:09:22.133 "name": "pt3", 00:09:22.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.133 "is_configured": true, 00:09:22.133 "data_offset": 2048, 00:09:22.133 "data_size": 63488 00:09:22.133 } 00:09:22.133 ] 00:09:22.133 } 00:09:22.133 } 00:09:22.133 }' 00:09:22.133 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.133 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:22.133 pt2 00:09:22.133 pt3' 00:09:22.133 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.133 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.133 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.133 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:22.133 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.133 21:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.133 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.133 21:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:22.133 21:40:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.133 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:22.394 
[2024-09-29 21:40:41.129849] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 104b5b06-ca4c-499e-95ff-5ef6967572bb '!=' 104b5b06-ca4c-499e-95ff-5ef6967572bb ']' 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66923 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 66923 ']' 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 66923 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66923 00:09:22.394 killing process with pid 66923 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66923' 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 66923 00:09:22.394 [2024-09-29 21:40:41.207292] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.394 [2024-09-29 21:40:41.207378] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.394 [2024-09-29 21:40:41.207435] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.394 [2024-09-29 21:40:41.207449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:22.394 21:40:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 66923 00:09:22.654 [2024-09-29 21:40:41.529693] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.043 21:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:24.043 00:09:24.043 real 0m5.503s 00:09:24.043 user 0m7.619s 00:09:24.043 sys 0m1.035s 00:09:24.043 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.043 ************************************ 00:09:24.043 END TEST raid_superblock_test 00:09:24.043 ************************************ 00:09:24.043 21:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.043 21:40:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:24.043 21:40:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:24.043 21:40:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.043 21:40:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.043 ************************************ 00:09:24.043 START TEST raid_read_error_test 00:09:24.043 ************************************ 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:24.043 21:40:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ewo1kgyyt8 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67181 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67181 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 67181 ']' 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.043 21:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.323 [2024-09-29 21:40:43.055302] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:24.323 [2024-09-29 21:40:43.055437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67181 ] 00:09:24.323 [2024-09-29 21:40:43.225152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.593 [2024-09-29 21:40:43.469651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.853 [2024-09-29 21:40:43.703505] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.853 [2024-09-29 21:40:43.703542] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.113 BaseBdev1_malloc 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.113 true 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.113 [2024-09-29 21:40:43.935896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:25.113 [2024-09-29 21:40:43.935966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.113 [2024-09-29 21:40:43.935983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:25.113 [2024-09-29 21:40:43.935995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.113 [2024-09-29 21:40:43.938444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.113 [2024-09-29 21:40:43.938485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:25.113 BaseBdev1 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.113 21:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.113 BaseBdev2_malloc 00:09:25.113 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.113 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:25.113 21:40:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.113 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.113 true 00:09:25.113 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.113 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:25.113 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.113 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.113 [2024-09-29 21:40:44.037671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:25.113 [2024-09-29 21:40:44.037727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.113 [2024-09-29 21:40:44.037743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:25.114 [2024-09-29 21:40:44.037754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.114 [2024-09-29 21:40:44.040080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.114 [2024-09-29 21:40:44.040183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:25.114 BaseBdev2 00:09:25.114 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.114 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:25.114 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:25.114 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.114 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.114 BaseBdev3_malloc 00:09:25.114 21:40:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.114 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:25.114 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.114 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.373 true 00:09:25.373 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.373 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:25.373 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.373 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.373 [2024-09-29 21:40:44.108832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:25.373 [2024-09-29 21:40:44.108890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.373 [2024-09-29 21:40:44.108908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:25.373 [2024-09-29 21:40:44.108920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.373 [2024-09-29 21:40:44.111275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.373 [2024-09-29 21:40:44.111311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:25.373 BaseBdev3 00:09:25.373 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.373 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:25.373 21:40:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.373 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.373 [2024-09-29 21:40:44.120898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:25.373 [2024-09-29 21:40:44.122971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.373 [2024-09-29 21:40:44.123068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.373 [2024-09-29 21:40:44.123287] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:25.373 [2024-09-29 21:40:44.123300] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:25.373 [2024-09-29 21:40:44.123540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:25.373 [2024-09-29 21:40:44.123726] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:25.373 [2024-09-29 21:40:44.123739] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:25.373 [2024-09-29 21:40:44.123896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.373 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.373 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.374 21:40:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.374 "name": "raid_bdev1", 00:09:25.374 "uuid": "9997c7cd-c2de-46c6-9801-b8b45b7ccc7a", 00:09:25.374 "strip_size_kb": 64, 00:09:25.374 "state": "online", 00:09:25.374 "raid_level": "concat", 00:09:25.374 "superblock": true, 00:09:25.374 "num_base_bdevs": 3, 00:09:25.374 "num_base_bdevs_discovered": 3, 00:09:25.374 "num_base_bdevs_operational": 3, 00:09:25.374 "base_bdevs_list": [ 00:09:25.374 { 00:09:25.374 "name": "BaseBdev1", 00:09:25.374 "uuid": "5007307d-47c0-5006-b61e-3f9cccef52cb", 00:09:25.374 "is_configured": true, 00:09:25.374 "data_offset": 2048, 00:09:25.374 "data_size": 63488 00:09:25.374 }, 00:09:25.374 { 00:09:25.374 "name": "BaseBdev2", 00:09:25.374 "uuid": "1b4a45cf-5b6a-5bee-8e2d-e86816b8835c", 00:09:25.374 "is_configured": true, 00:09:25.374 "data_offset": 2048, 00:09:25.374 "data_size": 63488 
00:09:25.374 }, 00:09:25.374 { 00:09:25.374 "name": "BaseBdev3", 00:09:25.374 "uuid": "b3a75487-125b-52e1-b55b-808fe423503a", 00:09:25.374 "is_configured": true, 00:09:25.374 "data_offset": 2048, 00:09:25.374 "data_size": 63488 00:09:25.374 } 00:09:25.374 ] 00:09:25.374 }' 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.374 21:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.633 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:25.633 21:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:25.893 [2024-09-29 21:40:44.617519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.834 "name": "raid_bdev1", 00:09:26.834 "uuid": "9997c7cd-c2de-46c6-9801-b8b45b7ccc7a", 00:09:26.834 "strip_size_kb": 64, 00:09:26.834 "state": "online", 00:09:26.834 "raid_level": "concat", 00:09:26.834 "superblock": true, 00:09:26.834 "num_base_bdevs": 3, 00:09:26.834 "num_base_bdevs_discovered": 3, 00:09:26.834 "num_base_bdevs_operational": 3, 00:09:26.834 "base_bdevs_list": [ 00:09:26.834 { 00:09:26.834 "name": "BaseBdev1", 00:09:26.834 "uuid": "5007307d-47c0-5006-b61e-3f9cccef52cb", 00:09:26.834 "is_configured": true, 00:09:26.834 "data_offset": 2048, 00:09:26.834 "data_size": 63488 
00:09:26.834 }, 00:09:26.834 { 00:09:26.834 "name": "BaseBdev2", 00:09:26.834 "uuid": "1b4a45cf-5b6a-5bee-8e2d-e86816b8835c", 00:09:26.834 "is_configured": true, 00:09:26.834 "data_offset": 2048, 00:09:26.834 "data_size": 63488 00:09:26.834 }, 00:09:26.834 { 00:09:26.834 "name": "BaseBdev3", 00:09:26.834 "uuid": "b3a75487-125b-52e1-b55b-808fe423503a", 00:09:26.834 "is_configured": true, 00:09:26.834 "data_offset": 2048, 00:09:26.834 "data_size": 63488 00:09:26.834 } 00:09:26.834 ] 00:09:26.834 }' 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.834 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.094 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:27.094 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.094 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.094 [2024-09-29 21:40:45.965801] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:27.094 [2024-09-29 21:40:45.965848] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.094 [2024-09-29 21:40:45.968394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.094 [2024-09-29 21:40:45.968442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.094 [2024-09-29 21:40:45.968483] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.094 [2024-09-29 21:40:45.968493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:27.094 { 00:09:27.094 "results": [ 00:09:27.094 { 00:09:27.094 "job": "raid_bdev1", 00:09:27.094 "core_mask": "0x1", 00:09:27.094 "workload": "randrw", 00:09:27.094 "percentage": 50, 
00:09:27.094 "status": "finished", 00:09:27.094 "queue_depth": 1, 00:09:27.094 "io_size": 131072, 00:09:27.094 "runtime": 1.348841, 00:09:27.094 "iops": 14493.18340708801, 00:09:27.094 "mibps": 1811.6479258860013, 00:09:27.094 "io_failed": 1, 00:09:27.094 "io_timeout": 0, 00:09:27.094 "avg_latency_us": 97.1923774891388, 00:09:27.094 "min_latency_us": 24.593886462882097, 00:09:27.094 "max_latency_us": 1359.3711790393013 00:09:27.094 } 00:09:27.094 ], 00:09:27.094 "core_count": 1 00:09:27.094 } 00:09:27.094 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.094 21:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67181 00:09:27.094 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 67181 ']' 00:09:27.094 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 67181 00:09:27.094 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:27.094 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.094 21:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67181 00:09:27.094 21:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:27.094 21:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:27.094 21:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67181' 00:09:27.094 killing process with pid 67181 00:09:27.094 21:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 67181 00:09:27.094 [2024-09-29 21:40:46.017106] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:27.094 21:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 67181 00:09:27.354 [2024-09-29 
21:40:46.261120] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.734 21:40:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ewo1kgyyt8 00:09:28.734 21:40:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:28.734 21:40:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:28.734 21:40:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:28.734 21:40:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:28.734 21:40:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:28.734 21:40:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:28.734 21:40:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:28.734 00:09:28.734 real 0m4.715s 00:09:28.734 user 0m5.349s 00:09:28.734 sys 0m0.687s 00:09:28.734 21:40:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.734 21:40:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.734 ************************************ 00:09:28.734 END TEST raid_read_error_test 00:09:28.734 ************************************ 00:09:28.734 21:40:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:28.734 21:40:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:28.734 21:40:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.734 21:40:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.994 ************************************ 00:09:28.994 START TEST raid_write_error_test 00:09:28.994 ************************************ 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:09:28.994 21:40:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:28.994 21:40:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.22zyaGd5W9 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67330 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67330 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67330 ']' 00:09:28.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.994 21:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.995 [2024-09-29 21:40:47.833893] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:28.995 [2024-09-29 21:40:47.834006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67330 ] 00:09:29.253 [2024-09-29 21:40:47.996750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.511 [2024-09-29 21:40:48.248050] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.512 [2024-09-29 21:40:48.475417] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.512 [2024-09-29 21:40:48.475519] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.771 BaseBdev1_malloc 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.771 true 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.771 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.771 [2024-09-29 21:40:48.725189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:29.771 [2024-09-29 21:40:48.725256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.772 [2024-09-29 21:40:48.725276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:29.772 [2024-09-29 21:40:48.725287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.772 [2024-09-29 21:40:48.727734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.772 [2024-09-29 21:40:48.727776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:29.772 BaseBdev1 00:09:29.772 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.772 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:29.772 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:29.772 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.772 21:40:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.030 BaseBdev2_malloc 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.030 true 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.030 [2024-09-29 21:40:48.827389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:30.030 [2024-09-29 21:40:48.827451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.030 [2024-09-29 21:40:48.827468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:30.030 [2024-09-29 21:40:48.827481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.030 [2024-09-29 21:40:48.829876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.030 [2024-09-29 21:40:48.829919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:30.030 BaseBdev2 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:30.030 21:40:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.030 BaseBdev3_malloc 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.030 true 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.030 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.031 [2024-09-29 21:40:48.899443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:30.031 [2024-09-29 21:40:48.899500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.031 [2024-09-29 21:40:48.899517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:30.031 [2024-09-29 21:40:48.899529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.031 [2024-09-29 21:40:48.901898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.031 [2024-09-29 21:40:48.902025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:30.031 BaseBdev3 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.031 [2024-09-29 21:40:48.911508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.031 [2024-09-29 21:40:48.913563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.031 [2024-09-29 21:40:48.913657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.031 [2024-09-29 21:40:48.913859] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:30.031 [2024-09-29 21:40:48.913871] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:30.031 [2024-09-29 21:40:48.914144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:30.031 [2024-09-29 21:40:48.914317] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:30.031 [2024-09-29 21:40:48.914329] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:30.031 [2024-09-29 21:40:48.914493] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.031 "name": "raid_bdev1", 00:09:30.031 "uuid": "80f735f2-3c3d-4e51-a6ba-e26fd2ea4354", 00:09:30.031 "strip_size_kb": 64, 00:09:30.031 "state": "online", 00:09:30.031 "raid_level": "concat", 00:09:30.031 "superblock": true, 00:09:30.031 "num_base_bdevs": 3, 00:09:30.031 "num_base_bdevs_discovered": 3, 00:09:30.031 "num_base_bdevs_operational": 3, 00:09:30.031 "base_bdevs_list": [ 00:09:30.031 { 00:09:30.031 
"name": "BaseBdev1", 00:09:30.031 "uuid": "62c61068-a311-5b1a-91c6-8ed60e1e7300", 00:09:30.031 "is_configured": true, 00:09:30.031 "data_offset": 2048, 00:09:30.031 "data_size": 63488 00:09:30.031 }, 00:09:30.031 { 00:09:30.031 "name": "BaseBdev2", 00:09:30.031 "uuid": "2be785c4-2a4d-51db-acff-470184e29cf1", 00:09:30.031 "is_configured": true, 00:09:30.031 "data_offset": 2048, 00:09:30.031 "data_size": 63488 00:09:30.031 }, 00:09:30.031 { 00:09:30.031 "name": "BaseBdev3", 00:09:30.031 "uuid": "743a2cb5-eb78-5c6c-8ee7-db71de868fa9", 00:09:30.031 "is_configured": true, 00:09:30.031 "data_offset": 2048, 00:09:30.031 "data_size": 63488 00:09:30.031 } 00:09:30.031 ] 00:09:30.031 }' 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.031 21:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.597 21:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:30.597 21:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:30.598 [2024-09-29 21:40:49.443936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.537 "name": "raid_bdev1", 00:09:31.537 "uuid": "80f735f2-3c3d-4e51-a6ba-e26fd2ea4354", 00:09:31.537 "strip_size_kb": 64, 00:09:31.537 "state": "online", 
00:09:31.537 "raid_level": "concat", 00:09:31.537 "superblock": true, 00:09:31.537 "num_base_bdevs": 3, 00:09:31.537 "num_base_bdevs_discovered": 3, 00:09:31.537 "num_base_bdevs_operational": 3, 00:09:31.537 "base_bdevs_list": [ 00:09:31.537 { 00:09:31.537 "name": "BaseBdev1", 00:09:31.537 "uuid": "62c61068-a311-5b1a-91c6-8ed60e1e7300", 00:09:31.537 "is_configured": true, 00:09:31.537 "data_offset": 2048, 00:09:31.537 "data_size": 63488 00:09:31.537 }, 00:09:31.537 { 00:09:31.537 "name": "BaseBdev2", 00:09:31.537 "uuid": "2be785c4-2a4d-51db-acff-470184e29cf1", 00:09:31.537 "is_configured": true, 00:09:31.537 "data_offset": 2048, 00:09:31.537 "data_size": 63488 00:09:31.537 }, 00:09:31.537 { 00:09:31.537 "name": "BaseBdev3", 00:09:31.537 "uuid": "743a2cb5-eb78-5c6c-8ee7-db71de868fa9", 00:09:31.537 "is_configured": true, 00:09:31.537 "data_offset": 2048, 00:09:31.537 "data_size": 63488 00:09:31.537 } 00:09:31.537 ] 00:09:31.537 }' 00:09:31.537 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.538 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.797 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:31.797 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.797 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.057 [2024-09-29 21:40:50.780163] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:32.057 [2024-09-29 21:40:50.780205] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.057 [2024-09-29 21:40:50.782851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.057 [2024-09-29 21:40:50.782897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.057 [2024-09-29 21:40:50.782937] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.057 [2024-09-29 21:40:50.782946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:32.057 { 00:09:32.057 "results": [ 00:09:32.057 { 00:09:32.057 "job": "raid_bdev1", 00:09:32.057 "core_mask": "0x1", 00:09:32.057 "workload": "randrw", 00:09:32.057 "percentage": 50, 00:09:32.057 "status": "finished", 00:09:32.057 "queue_depth": 1, 00:09:32.057 "io_size": 131072, 00:09:32.057 "runtime": 1.336779, 00:09:32.057 "iops": 14417.491597339575, 00:09:32.057 "mibps": 1802.1864496674468, 00:09:32.057 "io_failed": 1, 00:09:32.057 "io_timeout": 0, 00:09:32.057 "avg_latency_us": 97.71589901185978, 00:09:32.057 "min_latency_us": 24.482096069868994, 00:09:32.057 "max_latency_us": 1373.6803493449781 00:09:32.057 } 00:09:32.057 ], 00:09:32.057 "core_count": 1 00:09:32.057 } 00:09:32.057 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.057 21:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67330 00:09:32.057 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67330 ']' 00:09:32.057 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67330 00:09:32.057 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:32.057 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.057 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67330 00:09:32.057 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.057 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.057 21:40:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 67330' 00:09:32.057 killing process with pid 67330 00:09:32.057 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67330 00:09:32.057 [2024-09-29 21:40:50.826465] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:32.057 21:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67330 00:09:32.316 [2024-09-29 21:40:51.069494] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.696 21:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:33.696 21:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.22zyaGd5W9 00:09:33.696 21:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:33.696 21:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:33.697 21:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:33.697 ************************************ 00:09:33.697 END TEST raid_write_error_test 00:09:33.697 ************************************ 00:09:33.697 21:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.697 21:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:33.697 21:40:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:33.697 00:09:33.697 real 0m4.736s 00:09:33.697 user 0m5.419s 00:09:33.697 sys 0m0.679s 00:09:33.697 21:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.697 21:40:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.697 21:40:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:33.697 21:40:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:33.697 21:40:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:33.697 21:40:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.697 21:40:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.697 ************************************ 00:09:33.697 START TEST raid_state_function_test 00:09:33.697 ************************************ 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:33.697 Process raid pid: 67468 00:09:33.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67468 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67468' 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67468 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67468 ']' 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.697 
21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.697 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:33.697 [2024-09-29 21:40:52.632846] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:33.697 [2024-09-29 21:40:52.632965] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.957 [2024-09-29 21:40:52.802388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.216 [2024-09-29 21:40:53.045654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.476 [2024-09-29 21:40:53.273031] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.476 [2024-09-29 21:40:53.273084] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.476 [2024-09-29 21:40:53.439247] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.476 [2024-09-29 21:40:53.439310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.476 [2024-09-29 21:40:53.439320] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.476 [2024-09-29 21:40:53.439330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.476 [2024-09-29 21:40:53.439338] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.476 [2024-09-29 21:40:53.439349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.476 
21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.476 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.735 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.735 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.735 "name": "Existed_Raid", 00:09:34.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.735 "strip_size_kb": 0, 00:09:34.735 "state": "configuring", 00:09:34.735 "raid_level": "raid1", 00:09:34.735 "superblock": false, 00:09:34.735 "num_base_bdevs": 3, 00:09:34.735 "num_base_bdevs_discovered": 0, 00:09:34.735 "num_base_bdevs_operational": 3, 00:09:34.735 "base_bdevs_list": [ 00:09:34.735 { 00:09:34.735 "name": "BaseBdev1", 00:09:34.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.735 "is_configured": false, 00:09:34.735 "data_offset": 0, 00:09:34.735 "data_size": 0 00:09:34.735 }, 00:09:34.735 { 00:09:34.735 "name": "BaseBdev2", 00:09:34.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.735 "is_configured": false, 00:09:34.735 "data_offset": 0, 00:09:34.735 "data_size": 0 00:09:34.735 }, 00:09:34.735 { 00:09:34.735 "name": "BaseBdev3", 00:09:34.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.735 "is_configured": false, 00:09:34.735 "data_offset": 0, 00:09:34.735 "data_size": 0 00:09:34.735 } 00:09:34.735 ] 00:09:34.735 }' 00:09:34.735 21:40:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.735 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.060 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.060 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.060 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.060 [2024-09-29 21:40:53.878409] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.060 [2024-09-29 21:40:53.878503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:35.060 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.060 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:35.060 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.060 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.060 [2024-09-29 21:40:53.890414] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:35.060 [2024-09-29 21:40:53.890498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:35.061 [2024-09-29 21:40:53.890524] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.061 [2024-09-29 21:40:53.890547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.061 [2024-09-29 21:40:53.890564] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:35.061 [2024-09-29 21:40:53.890584] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.061 [2024-09-29 21:40:53.974327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.061 BaseBdev1 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.061 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.061 [ 00:09:35.061 { 00:09:35.061 "name": "BaseBdev1", 00:09:35.061 "aliases": [ 00:09:35.061 "589a6799-8b64-4b64-898e-b987ecf34d99" 00:09:35.061 ], 00:09:35.061 "product_name": "Malloc disk", 00:09:35.061 "block_size": 512, 00:09:35.061 "num_blocks": 65536, 00:09:35.061 "uuid": "589a6799-8b64-4b64-898e-b987ecf34d99", 00:09:35.061 "assigned_rate_limits": { 00:09:35.061 "rw_ios_per_sec": 0, 00:09:35.061 "rw_mbytes_per_sec": 0, 00:09:35.061 "r_mbytes_per_sec": 0, 00:09:35.061 "w_mbytes_per_sec": 0 00:09:35.061 }, 00:09:35.061 "claimed": true, 00:09:35.061 "claim_type": "exclusive_write", 00:09:35.061 "zoned": false, 00:09:35.061 "supported_io_types": { 00:09:35.061 "read": true, 00:09:35.061 "write": true, 00:09:35.061 "unmap": true, 00:09:35.061 "flush": true, 00:09:35.061 "reset": true, 00:09:35.061 "nvme_admin": false, 00:09:35.061 "nvme_io": false, 00:09:35.061 "nvme_io_md": false, 00:09:35.061 "write_zeroes": true, 00:09:35.061 "zcopy": true, 00:09:35.061 "get_zone_info": false, 00:09:35.061 "zone_management": false, 00:09:35.061 "zone_append": false, 00:09:35.061 "compare": false, 00:09:35.061 "compare_and_write": false, 00:09:35.061 "abort": true, 00:09:35.061 "seek_hole": false, 00:09:35.061 "seek_data": false, 00:09:35.061 "copy": true, 00:09:35.061 "nvme_iov_md": false 00:09:35.061 }, 00:09:35.061 "memory_domains": [ 00:09:35.061 { 00:09:35.061 "dma_device_id": "system", 00:09:35.061 "dma_device_type": 1 00:09:35.061 }, 00:09:35.061 { 00:09:35.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.061 "dma_device_type": 2 00:09:35.061 } 00:09:35.061 ], 00:09:35.061 "driver_specific": {} 00:09:35.061 } 00:09:35.061 ] 00:09:35.061 21:40:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.061 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.321 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:35.321 "name": "Existed_Raid", 00:09:35.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.321 "strip_size_kb": 0, 00:09:35.321 "state": "configuring", 00:09:35.321 "raid_level": "raid1", 00:09:35.321 "superblock": false, 00:09:35.321 "num_base_bdevs": 3, 00:09:35.321 "num_base_bdevs_discovered": 1, 00:09:35.321 "num_base_bdevs_operational": 3, 00:09:35.321 "base_bdevs_list": [ 00:09:35.321 { 00:09:35.321 "name": "BaseBdev1", 00:09:35.321 "uuid": "589a6799-8b64-4b64-898e-b987ecf34d99", 00:09:35.321 "is_configured": true, 00:09:35.321 "data_offset": 0, 00:09:35.321 "data_size": 65536 00:09:35.321 }, 00:09:35.321 { 00:09:35.321 "name": "BaseBdev2", 00:09:35.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.321 "is_configured": false, 00:09:35.321 "data_offset": 0, 00:09:35.321 "data_size": 0 00:09:35.321 }, 00:09:35.321 { 00:09:35.321 "name": "BaseBdev3", 00:09:35.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.321 "is_configured": false, 00:09:35.321 "data_offset": 0, 00:09:35.321 "data_size": 0 00:09:35.321 } 00:09:35.321 ] 00:09:35.321 }' 00:09:35.321 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.321 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.581 [2024-09-29 21:40:54.449589] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.581 [2024-09-29 21:40:54.449633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.581 [2024-09-29 21:40:54.461618] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.581 [2024-09-29 21:40:54.463744] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.581 [2024-09-29 21:40:54.463789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.581 [2024-09-29 21:40:54.463799] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:35.581 [2024-09-29 21:40:54.463808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.581 "name": "Existed_Raid", 00:09:35.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.581 "strip_size_kb": 0, 00:09:35.581 "state": "configuring", 00:09:35.581 "raid_level": "raid1", 00:09:35.581 "superblock": false, 00:09:35.581 "num_base_bdevs": 3, 00:09:35.581 "num_base_bdevs_discovered": 1, 00:09:35.581 "num_base_bdevs_operational": 3, 00:09:35.581 "base_bdevs_list": [ 00:09:35.581 { 00:09:35.581 "name": "BaseBdev1", 00:09:35.581 "uuid": "589a6799-8b64-4b64-898e-b987ecf34d99", 00:09:35.581 "is_configured": true, 00:09:35.581 "data_offset": 0, 00:09:35.581 "data_size": 65536 00:09:35.581 }, 00:09:35.581 { 00:09:35.581 "name": "BaseBdev2", 00:09:35.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.581 
"is_configured": false, 00:09:35.581 "data_offset": 0, 00:09:35.581 "data_size": 0 00:09:35.581 }, 00:09:35.581 { 00:09:35.581 "name": "BaseBdev3", 00:09:35.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.581 "is_configured": false, 00:09:35.581 "data_offset": 0, 00:09:35.581 "data_size": 0 00:09:35.581 } 00:09:35.581 ] 00:09:35.581 }' 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.581 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.151 [2024-09-29 21:40:54.933248] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.151 BaseBdev2 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.151 21:40:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.151 [ 00:09:36.151 { 00:09:36.151 "name": "BaseBdev2", 00:09:36.151 "aliases": [ 00:09:36.151 "737f078b-42f2-4f19-981d-93bb69a4cfe2" 00:09:36.151 ], 00:09:36.151 "product_name": "Malloc disk", 00:09:36.151 "block_size": 512, 00:09:36.151 "num_blocks": 65536, 00:09:36.151 "uuid": "737f078b-42f2-4f19-981d-93bb69a4cfe2", 00:09:36.151 "assigned_rate_limits": { 00:09:36.151 "rw_ios_per_sec": 0, 00:09:36.151 "rw_mbytes_per_sec": 0, 00:09:36.151 "r_mbytes_per_sec": 0, 00:09:36.151 "w_mbytes_per_sec": 0 00:09:36.151 }, 00:09:36.151 "claimed": true, 00:09:36.151 "claim_type": "exclusive_write", 00:09:36.151 "zoned": false, 00:09:36.151 "supported_io_types": { 00:09:36.151 "read": true, 00:09:36.151 "write": true, 00:09:36.151 "unmap": true, 00:09:36.151 "flush": true, 00:09:36.151 "reset": true, 00:09:36.151 "nvme_admin": false, 00:09:36.151 "nvme_io": false, 00:09:36.151 "nvme_io_md": false, 00:09:36.151 "write_zeroes": true, 00:09:36.151 "zcopy": true, 00:09:36.151 "get_zone_info": false, 00:09:36.151 "zone_management": false, 00:09:36.151 "zone_append": false, 00:09:36.151 "compare": false, 00:09:36.151 "compare_and_write": false, 00:09:36.151 "abort": true, 00:09:36.151 "seek_hole": false, 00:09:36.151 "seek_data": false, 00:09:36.151 "copy": true, 00:09:36.151 "nvme_iov_md": false 00:09:36.151 }, 00:09:36.151 
"memory_domains": [ 00:09:36.151 { 00:09:36.151 "dma_device_id": "system", 00:09:36.151 "dma_device_type": 1 00:09:36.151 }, 00:09:36.151 { 00:09:36.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.151 "dma_device_type": 2 00:09:36.151 } 00:09:36.151 ], 00:09:36.151 "driver_specific": {} 00:09:36.151 } 00:09:36.151 ] 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.151 21:40:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.151 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.151 "name": "Existed_Raid", 00:09:36.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.151 "strip_size_kb": 0, 00:09:36.151 "state": "configuring", 00:09:36.151 "raid_level": "raid1", 00:09:36.151 "superblock": false, 00:09:36.151 "num_base_bdevs": 3, 00:09:36.151 "num_base_bdevs_discovered": 2, 00:09:36.151 "num_base_bdevs_operational": 3, 00:09:36.151 "base_bdevs_list": [ 00:09:36.151 { 00:09:36.151 "name": "BaseBdev1", 00:09:36.151 "uuid": "589a6799-8b64-4b64-898e-b987ecf34d99", 00:09:36.151 "is_configured": true, 00:09:36.151 "data_offset": 0, 00:09:36.151 "data_size": 65536 00:09:36.151 }, 00:09:36.151 { 00:09:36.151 "name": "BaseBdev2", 00:09:36.151 "uuid": "737f078b-42f2-4f19-981d-93bb69a4cfe2", 00:09:36.151 "is_configured": true, 00:09:36.151 "data_offset": 0, 00:09:36.151 "data_size": 65536 00:09:36.151 }, 00:09:36.151 { 00:09:36.151 "name": "BaseBdev3", 00:09:36.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.151 "is_configured": false, 00:09:36.151 "data_offset": 0, 00:09:36.151 "data_size": 0 00:09:36.151 } 00:09:36.151 ] 00:09:36.151 }' 00:09:36.151 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.151 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.719 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:36.719 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.719 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.720 [2024-09-29 21:40:55.446214] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.720 [2024-09-29 21:40:55.446335] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:36.720 [2024-09-29 21:40:55.446375] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:36.720 [2024-09-29 21:40:55.446712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:36.720 [2024-09-29 21:40:55.446938] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:36.720 [2024-09-29 21:40:55.446980] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:36.720 [2024-09-29 21:40:55.447296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.720 BaseBdev3 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.720 [ 00:09:36.720 { 00:09:36.720 "name": "BaseBdev3", 00:09:36.720 "aliases": [ 00:09:36.720 "7877f546-11e6-45e0-9e55-8d658aae6eec" 00:09:36.720 ], 00:09:36.720 "product_name": "Malloc disk", 00:09:36.720 "block_size": 512, 00:09:36.720 "num_blocks": 65536, 00:09:36.720 "uuid": "7877f546-11e6-45e0-9e55-8d658aae6eec", 00:09:36.720 "assigned_rate_limits": { 00:09:36.720 "rw_ios_per_sec": 0, 00:09:36.720 "rw_mbytes_per_sec": 0, 00:09:36.720 "r_mbytes_per_sec": 0, 00:09:36.720 "w_mbytes_per_sec": 0 00:09:36.720 }, 00:09:36.720 "claimed": true, 00:09:36.720 "claim_type": "exclusive_write", 00:09:36.720 "zoned": false, 00:09:36.720 "supported_io_types": { 00:09:36.720 "read": true, 00:09:36.720 "write": true, 00:09:36.720 "unmap": true, 00:09:36.720 "flush": true, 00:09:36.720 "reset": true, 00:09:36.720 "nvme_admin": false, 00:09:36.720 "nvme_io": false, 00:09:36.720 "nvme_io_md": false, 00:09:36.720 "write_zeroes": true, 00:09:36.720 "zcopy": true, 00:09:36.720 "get_zone_info": false, 00:09:36.720 "zone_management": false, 00:09:36.720 "zone_append": false, 00:09:36.720 "compare": false, 00:09:36.720 "compare_and_write": false, 00:09:36.720 "abort": true, 00:09:36.720 "seek_hole": false, 00:09:36.720 "seek_data": false, 00:09:36.720 
"copy": true, 00:09:36.720 "nvme_iov_md": false 00:09:36.720 }, 00:09:36.720 "memory_domains": [ 00:09:36.720 { 00:09:36.720 "dma_device_id": "system", 00:09:36.720 "dma_device_type": 1 00:09:36.720 }, 00:09:36.720 { 00:09:36.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.720 "dma_device_type": 2 00:09:36.720 } 00:09:36.720 ], 00:09:36.720 "driver_specific": {} 00:09:36.720 } 00:09:36.720 ] 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.720 21:40:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.720 "name": "Existed_Raid", 00:09:36.720 "uuid": "40892960-4272-4cac-a96a-c65047a3db6e", 00:09:36.720 "strip_size_kb": 0, 00:09:36.720 "state": "online", 00:09:36.720 "raid_level": "raid1", 00:09:36.720 "superblock": false, 00:09:36.720 "num_base_bdevs": 3, 00:09:36.720 "num_base_bdevs_discovered": 3, 00:09:36.720 "num_base_bdevs_operational": 3, 00:09:36.720 "base_bdevs_list": [ 00:09:36.720 { 00:09:36.720 "name": "BaseBdev1", 00:09:36.720 "uuid": "589a6799-8b64-4b64-898e-b987ecf34d99", 00:09:36.720 "is_configured": true, 00:09:36.720 "data_offset": 0, 00:09:36.720 "data_size": 65536 00:09:36.720 }, 00:09:36.720 { 00:09:36.720 "name": "BaseBdev2", 00:09:36.720 "uuid": "737f078b-42f2-4f19-981d-93bb69a4cfe2", 00:09:36.720 "is_configured": true, 00:09:36.720 "data_offset": 0, 00:09:36.720 "data_size": 65536 00:09:36.720 }, 00:09:36.720 { 00:09:36.720 "name": "BaseBdev3", 00:09:36.720 "uuid": "7877f546-11e6-45e0-9e55-8d658aae6eec", 00:09:36.720 "is_configured": true, 00:09:36.720 "data_offset": 0, 00:09:36.720 "data_size": 65536 00:09:36.720 } 00:09:36.720 ] 00:09:36.720 }' 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.720 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.980 21:40:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:36.980 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:36.980 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:36.980 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:36.980 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:36.980 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:36.980 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:36.980 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:36.980 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.980 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.980 [2024-09-29 21:40:55.905773] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.980 21:40:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.980 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:36.980 "name": "Existed_Raid", 00:09:36.980 "aliases": [ 00:09:36.980 "40892960-4272-4cac-a96a-c65047a3db6e" 00:09:36.980 ], 00:09:36.980 "product_name": "Raid Volume", 00:09:36.980 "block_size": 512, 00:09:36.980 "num_blocks": 65536, 00:09:36.980 "uuid": "40892960-4272-4cac-a96a-c65047a3db6e", 00:09:36.980 "assigned_rate_limits": { 00:09:36.980 "rw_ios_per_sec": 0, 00:09:36.980 "rw_mbytes_per_sec": 0, 00:09:36.980 "r_mbytes_per_sec": 0, 00:09:36.980 "w_mbytes_per_sec": 0 00:09:36.980 }, 00:09:36.980 "claimed": false, 00:09:36.980 "zoned": false, 
00:09:36.980 "supported_io_types": { 00:09:36.980 "read": true, 00:09:36.980 "write": true, 00:09:36.980 "unmap": false, 00:09:36.980 "flush": false, 00:09:36.980 "reset": true, 00:09:36.980 "nvme_admin": false, 00:09:36.980 "nvme_io": false, 00:09:36.980 "nvme_io_md": false, 00:09:36.980 "write_zeroes": true, 00:09:36.980 "zcopy": false, 00:09:36.980 "get_zone_info": false, 00:09:36.980 "zone_management": false, 00:09:36.980 "zone_append": false, 00:09:36.980 "compare": false, 00:09:36.980 "compare_and_write": false, 00:09:36.980 "abort": false, 00:09:36.980 "seek_hole": false, 00:09:36.980 "seek_data": false, 00:09:36.980 "copy": false, 00:09:36.980 "nvme_iov_md": false 00:09:36.980 }, 00:09:36.980 "memory_domains": [ 00:09:36.980 { 00:09:36.980 "dma_device_id": "system", 00:09:36.980 "dma_device_type": 1 00:09:36.980 }, 00:09:36.980 { 00:09:36.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.980 "dma_device_type": 2 00:09:36.980 }, 00:09:36.980 { 00:09:36.980 "dma_device_id": "system", 00:09:36.980 "dma_device_type": 1 00:09:36.980 }, 00:09:36.980 { 00:09:36.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.980 "dma_device_type": 2 00:09:36.980 }, 00:09:36.980 { 00:09:36.980 "dma_device_id": "system", 00:09:36.980 "dma_device_type": 1 00:09:36.980 }, 00:09:36.980 { 00:09:36.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.980 "dma_device_type": 2 00:09:36.980 } 00:09:36.980 ], 00:09:36.980 "driver_specific": { 00:09:36.980 "raid": { 00:09:36.980 "uuid": "40892960-4272-4cac-a96a-c65047a3db6e", 00:09:36.980 "strip_size_kb": 0, 00:09:36.980 "state": "online", 00:09:36.980 "raid_level": "raid1", 00:09:36.980 "superblock": false, 00:09:36.980 "num_base_bdevs": 3, 00:09:36.980 "num_base_bdevs_discovered": 3, 00:09:36.980 "num_base_bdevs_operational": 3, 00:09:36.980 "base_bdevs_list": [ 00:09:36.980 { 00:09:36.980 "name": "BaseBdev1", 00:09:36.980 "uuid": "589a6799-8b64-4b64-898e-b987ecf34d99", 00:09:36.980 "is_configured": true, 00:09:36.980 
"data_offset": 0, 00:09:36.980 "data_size": 65536 00:09:36.980 }, 00:09:36.980 { 00:09:36.980 "name": "BaseBdev2", 00:09:36.980 "uuid": "737f078b-42f2-4f19-981d-93bb69a4cfe2", 00:09:36.980 "is_configured": true, 00:09:36.980 "data_offset": 0, 00:09:36.980 "data_size": 65536 00:09:36.980 }, 00:09:36.980 { 00:09:36.980 "name": "BaseBdev3", 00:09:36.980 "uuid": "7877f546-11e6-45e0-9e55-8d658aae6eec", 00:09:36.980 "is_configured": true, 00:09:36.980 "data_offset": 0, 00:09:36.980 "data_size": 65536 00:09:36.980 } 00:09:36.980 ] 00:09:36.980 } 00:09:36.980 } 00:09:36.980 }' 00:09:36.980 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.241 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:37.241 BaseBdev2 00:09:37.241 BaseBdev3' 00:09:37.241 21:40:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.241 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.241 [2024-09-29 21:40:56.192985] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:37.500 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.500 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:37.500 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:37.500 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:37.500 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:37.500 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:37.500 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:37.500 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.500 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.500 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.500 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.500 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:37.500 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.501 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:37.501 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.501 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.501 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.501 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.501 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.501 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.501 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.501 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.501 "name": "Existed_Raid", 00:09:37.501 "uuid": "40892960-4272-4cac-a96a-c65047a3db6e", 00:09:37.501 "strip_size_kb": 0, 00:09:37.501 "state": "online", 00:09:37.501 "raid_level": "raid1", 00:09:37.501 "superblock": false, 00:09:37.501 "num_base_bdevs": 3, 00:09:37.501 "num_base_bdevs_discovered": 2, 00:09:37.501 "num_base_bdevs_operational": 2, 00:09:37.501 "base_bdevs_list": [ 00:09:37.501 { 00:09:37.501 "name": null, 00:09:37.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.501 "is_configured": false, 00:09:37.501 "data_offset": 0, 00:09:37.501 "data_size": 65536 00:09:37.501 }, 00:09:37.501 { 00:09:37.501 "name": "BaseBdev2", 00:09:37.501 "uuid": "737f078b-42f2-4f19-981d-93bb69a4cfe2", 00:09:37.501 "is_configured": true, 00:09:37.501 "data_offset": 0, 00:09:37.501 "data_size": 65536 00:09:37.501 }, 00:09:37.501 { 00:09:37.501 "name": "BaseBdev3", 00:09:37.501 "uuid": "7877f546-11e6-45e0-9e55-8d658aae6eec", 00:09:37.501 "is_configured": true, 00:09:37.501 "data_offset": 0, 00:09:37.501 "data_size": 65536 00:09:37.501 } 00:09:37.501 ] 
00:09:37.501 }' 00:09:37.501 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.501 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.069 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:38.069 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.070 [2024-09-29 21:40:56.805472] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:38.070 21:40:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.070 21:40:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.070 [2024-09-29 21:40:56.968021] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:38.070 [2024-09-29 21:40:56.968150] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.330 [2024-09-29 21:40:57.069855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.330 [2024-09-29 21:40:57.069906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.330 [2024-09-29 21:40:57.069919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:38.330 21:40:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.330 BaseBdev2 00:09:38.330 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.331 
21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.331 [ 00:09:38.331 { 00:09:38.331 "name": "BaseBdev2", 00:09:38.331 "aliases": [ 00:09:38.331 "25e97ab4-384c-410d-81ad-13807df51d28" 00:09:38.331 ], 00:09:38.331 "product_name": "Malloc disk", 00:09:38.331 "block_size": 512, 00:09:38.331 "num_blocks": 65536, 00:09:38.331 "uuid": "25e97ab4-384c-410d-81ad-13807df51d28", 00:09:38.331 "assigned_rate_limits": { 00:09:38.331 "rw_ios_per_sec": 0, 00:09:38.331 "rw_mbytes_per_sec": 0, 00:09:38.331 "r_mbytes_per_sec": 0, 00:09:38.331 "w_mbytes_per_sec": 0 00:09:38.331 }, 00:09:38.331 "claimed": false, 00:09:38.331 "zoned": false, 00:09:38.331 "supported_io_types": { 00:09:38.331 "read": true, 00:09:38.331 "write": true, 00:09:38.331 "unmap": true, 00:09:38.331 "flush": true, 00:09:38.331 "reset": true, 00:09:38.331 "nvme_admin": false, 00:09:38.331 "nvme_io": false, 00:09:38.331 "nvme_io_md": false, 00:09:38.331 "write_zeroes": true, 
00:09:38.331 "zcopy": true, 00:09:38.331 "get_zone_info": false, 00:09:38.331 "zone_management": false, 00:09:38.331 "zone_append": false, 00:09:38.331 "compare": false, 00:09:38.331 "compare_and_write": false, 00:09:38.331 "abort": true, 00:09:38.331 "seek_hole": false, 00:09:38.331 "seek_data": false, 00:09:38.331 "copy": true, 00:09:38.331 "nvme_iov_md": false 00:09:38.331 }, 00:09:38.331 "memory_domains": [ 00:09:38.331 { 00:09:38.331 "dma_device_id": "system", 00:09:38.331 "dma_device_type": 1 00:09:38.331 }, 00:09:38.331 { 00:09:38.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.331 "dma_device_type": 2 00:09:38.331 } 00:09:38.331 ], 00:09:38.331 "driver_specific": {} 00:09:38.331 } 00:09:38.331 ] 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.331 BaseBdev3 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:38.331 21:40:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.331 [ 00:09:38.331 { 00:09:38.331 "name": "BaseBdev3", 00:09:38.331 "aliases": [ 00:09:38.331 "29aa2f20-4afe-408a-9b7f-1d0052692a2c" 00:09:38.331 ], 00:09:38.331 "product_name": "Malloc disk", 00:09:38.331 "block_size": 512, 00:09:38.331 "num_blocks": 65536, 00:09:38.331 "uuid": "29aa2f20-4afe-408a-9b7f-1d0052692a2c", 00:09:38.331 "assigned_rate_limits": { 00:09:38.331 "rw_ios_per_sec": 0, 00:09:38.331 "rw_mbytes_per_sec": 0, 00:09:38.331 "r_mbytes_per_sec": 0, 00:09:38.331 "w_mbytes_per_sec": 0 00:09:38.331 }, 00:09:38.331 "claimed": false, 00:09:38.331 "zoned": false, 00:09:38.331 "supported_io_types": { 00:09:38.331 "read": true, 00:09:38.331 "write": true, 00:09:38.331 "unmap": true, 00:09:38.331 "flush": true, 00:09:38.331 "reset": true, 00:09:38.331 "nvme_admin": false, 00:09:38.331 "nvme_io": false, 00:09:38.331 "nvme_io_md": false, 00:09:38.331 "write_zeroes": true, 
00:09:38.331 "zcopy": true, 00:09:38.331 "get_zone_info": false, 00:09:38.331 "zone_management": false, 00:09:38.331 "zone_append": false, 00:09:38.331 "compare": false, 00:09:38.331 "compare_and_write": false, 00:09:38.331 "abort": true, 00:09:38.331 "seek_hole": false, 00:09:38.331 "seek_data": false, 00:09:38.331 "copy": true, 00:09:38.331 "nvme_iov_md": false 00:09:38.331 }, 00:09:38.331 "memory_domains": [ 00:09:38.331 { 00:09:38.331 "dma_device_id": "system", 00:09:38.331 "dma_device_type": 1 00:09:38.331 }, 00:09:38.331 { 00:09:38.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.331 "dma_device_type": 2 00:09:38.331 } 00:09:38.331 ], 00:09:38.331 "driver_specific": {} 00:09:38.331 } 00:09:38.331 ] 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.331 [2024-09-29 21:40:57.294079] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.331 [2024-09-29 21:40:57.294202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.331 [2024-09-29 21:40:57.294241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.331 [2024-09-29 21:40:57.296322] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.331 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.592 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.592 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:38.592 "name": "Existed_Raid", 00:09:38.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.592 "strip_size_kb": 0, 00:09:38.592 "state": "configuring", 00:09:38.592 "raid_level": "raid1", 00:09:38.592 "superblock": false, 00:09:38.592 "num_base_bdevs": 3, 00:09:38.592 "num_base_bdevs_discovered": 2, 00:09:38.592 "num_base_bdevs_operational": 3, 00:09:38.592 "base_bdevs_list": [ 00:09:38.592 { 00:09:38.592 "name": "BaseBdev1", 00:09:38.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.592 "is_configured": false, 00:09:38.592 "data_offset": 0, 00:09:38.592 "data_size": 0 00:09:38.592 }, 00:09:38.592 { 00:09:38.592 "name": "BaseBdev2", 00:09:38.592 "uuid": "25e97ab4-384c-410d-81ad-13807df51d28", 00:09:38.592 "is_configured": true, 00:09:38.592 "data_offset": 0, 00:09:38.592 "data_size": 65536 00:09:38.592 }, 00:09:38.592 { 00:09:38.592 "name": "BaseBdev3", 00:09:38.592 "uuid": "29aa2f20-4afe-408a-9b7f-1d0052692a2c", 00:09:38.592 "is_configured": true, 00:09:38.592 "data_offset": 0, 00:09:38.592 "data_size": 65536 00:09:38.592 } 00:09:38.592 ] 00:09:38.592 }' 00:09:38.592 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.592 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.854 [2024-09-29 21:40:57.705304] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.854 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.854 "name": "Existed_Raid", 00:09:38.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.854 "strip_size_kb": 0, 00:09:38.854 "state": "configuring", 00:09:38.854 "raid_level": "raid1", 00:09:38.854 "superblock": false, 00:09:38.854 "num_base_bdevs": 3, 
00:09:38.854 "num_base_bdevs_discovered": 1, 00:09:38.854 "num_base_bdevs_operational": 3, 00:09:38.854 "base_bdevs_list": [ 00:09:38.854 { 00:09:38.854 "name": "BaseBdev1", 00:09:38.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.855 "is_configured": false, 00:09:38.855 "data_offset": 0, 00:09:38.855 "data_size": 0 00:09:38.855 }, 00:09:38.855 { 00:09:38.855 "name": null, 00:09:38.855 "uuid": "25e97ab4-384c-410d-81ad-13807df51d28", 00:09:38.855 "is_configured": false, 00:09:38.855 "data_offset": 0, 00:09:38.855 "data_size": 65536 00:09:38.855 }, 00:09:38.855 { 00:09:38.855 "name": "BaseBdev3", 00:09:38.855 "uuid": "29aa2f20-4afe-408a-9b7f-1d0052692a2c", 00:09:38.855 "is_configured": true, 00:09:38.855 "data_offset": 0, 00:09:38.855 "data_size": 65536 00:09:38.855 } 00:09:38.855 ] 00:09:38.855 }' 00:09:38.855 21:40:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.855 21:40:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.446 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.447 21:40:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.447 [2024-09-29 21:40:58.271680] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.447 BaseBdev1 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.447 [ 00:09:39.447 { 00:09:39.447 "name": "BaseBdev1", 00:09:39.447 "aliases": [ 00:09:39.447 "ed3097b9-70bc-4ccc-ad98-86a63a97dcc1" 00:09:39.447 ], 00:09:39.447 "product_name": "Malloc disk", 
00:09:39.447 "block_size": 512, 00:09:39.447 "num_blocks": 65536, 00:09:39.447 "uuid": "ed3097b9-70bc-4ccc-ad98-86a63a97dcc1", 00:09:39.447 "assigned_rate_limits": { 00:09:39.447 "rw_ios_per_sec": 0, 00:09:39.447 "rw_mbytes_per_sec": 0, 00:09:39.447 "r_mbytes_per_sec": 0, 00:09:39.447 "w_mbytes_per_sec": 0 00:09:39.447 }, 00:09:39.447 "claimed": true, 00:09:39.447 "claim_type": "exclusive_write", 00:09:39.447 "zoned": false, 00:09:39.447 "supported_io_types": { 00:09:39.447 "read": true, 00:09:39.447 "write": true, 00:09:39.447 "unmap": true, 00:09:39.447 "flush": true, 00:09:39.447 "reset": true, 00:09:39.447 "nvme_admin": false, 00:09:39.447 "nvme_io": false, 00:09:39.447 "nvme_io_md": false, 00:09:39.447 "write_zeroes": true, 00:09:39.447 "zcopy": true, 00:09:39.447 "get_zone_info": false, 00:09:39.447 "zone_management": false, 00:09:39.447 "zone_append": false, 00:09:39.447 "compare": false, 00:09:39.447 "compare_and_write": false, 00:09:39.447 "abort": true, 00:09:39.447 "seek_hole": false, 00:09:39.447 "seek_data": false, 00:09:39.447 "copy": true, 00:09:39.447 "nvme_iov_md": false 00:09:39.447 }, 00:09:39.447 "memory_domains": [ 00:09:39.447 { 00:09:39.447 "dma_device_id": "system", 00:09:39.447 "dma_device_type": 1 00:09:39.447 }, 00:09:39.447 { 00:09:39.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.447 "dma_device_type": 2 00:09:39.447 } 00:09:39.447 ], 00:09:39.447 "driver_specific": {} 00:09:39.447 } 00:09:39.447 ] 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.447 "name": "Existed_Raid", 00:09:39.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.447 "strip_size_kb": 0, 00:09:39.447 "state": "configuring", 00:09:39.447 "raid_level": "raid1", 00:09:39.447 "superblock": false, 00:09:39.447 "num_base_bdevs": 3, 00:09:39.447 "num_base_bdevs_discovered": 2, 00:09:39.447 "num_base_bdevs_operational": 3, 00:09:39.447 "base_bdevs_list": [ 00:09:39.447 { 00:09:39.447 "name": "BaseBdev1", 00:09:39.447 "uuid": 
"ed3097b9-70bc-4ccc-ad98-86a63a97dcc1", 00:09:39.447 "is_configured": true, 00:09:39.447 "data_offset": 0, 00:09:39.447 "data_size": 65536 00:09:39.447 }, 00:09:39.447 { 00:09:39.447 "name": null, 00:09:39.447 "uuid": "25e97ab4-384c-410d-81ad-13807df51d28", 00:09:39.447 "is_configured": false, 00:09:39.447 "data_offset": 0, 00:09:39.447 "data_size": 65536 00:09:39.447 }, 00:09:39.447 { 00:09:39.447 "name": "BaseBdev3", 00:09:39.447 "uuid": "29aa2f20-4afe-408a-9b7f-1d0052692a2c", 00:09:39.447 "is_configured": true, 00:09:39.447 "data_offset": 0, 00:09:39.447 "data_size": 65536 00:09:39.447 } 00:09:39.447 ] 00:09:39.447 }' 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.447 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.029 [2024-09-29 21:40:58.790845] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:40.029 21:40:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.029 "name": "Existed_Raid", 00:09:40.029 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:40.029 "strip_size_kb": 0, 00:09:40.029 "state": "configuring", 00:09:40.029 "raid_level": "raid1", 00:09:40.029 "superblock": false, 00:09:40.029 "num_base_bdevs": 3, 00:09:40.029 "num_base_bdevs_discovered": 1, 00:09:40.029 "num_base_bdevs_operational": 3, 00:09:40.029 "base_bdevs_list": [ 00:09:40.029 { 00:09:40.029 "name": "BaseBdev1", 00:09:40.029 "uuid": "ed3097b9-70bc-4ccc-ad98-86a63a97dcc1", 00:09:40.029 "is_configured": true, 00:09:40.029 "data_offset": 0, 00:09:40.029 "data_size": 65536 00:09:40.029 }, 00:09:40.029 { 00:09:40.029 "name": null, 00:09:40.029 "uuid": "25e97ab4-384c-410d-81ad-13807df51d28", 00:09:40.029 "is_configured": false, 00:09:40.029 "data_offset": 0, 00:09:40.029 "data_size": 65536 00:09:40.029 }, 00:09:40.029 { 00:09:40.029 "name": null, 00:09:40.029 "uuid": "29aa2f20-4afe-408a-9b7f-1d0052692a2c", 00:09:40.029 "is_configured": false, 00:09:40.029 "data_offset": 0, 00:09:40.029 "data_size": 65536 00:09:40.029 } 00:09:40.029 ] 00:09:40.029 }' 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.029 21:40:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.288 [2024-09-29 21:40:59.246068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.288 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.547 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.547 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.547 "name": "Existed_Raid", 00:09:40.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.547 "strip_size_kb": 0, 00:09:40.547 "state": "configuring", 00:09:40.547 "raid_level": "raid1", 00:09:40.547 "superblock": false, 00:09:40.547 "num_base_bdevs": 3, 00:09:40.547 "num_base_bdevs_discovered": 2, 00:09:40.547 "num_base_bdevs_operational": 3, 00:09:40.547 "base_bdevs_list": [ 00:09:40.547 { 00:09:40.547 "name": "BaseBdev1", 00:09:40.547 "uuid": "ed3097b9-70bc-4ccc-ad98-86a63a97dcc1", 00:09:40.547 "is_configured": true, 00:09:40.547 "data_offset": 0, 00:09:40.547 "data_size": 65536 00:09:40.547 }, 00:09:40.547 { 00:09:40.547 "name": null, 00:09:40.547 "uuid": "25e97ab4-384c-410d-81ad-13807df51d28", 00:09:40.547 "is_configured": false, 00:09:40.547 "data_offset": 0, 00:09:40.547 "data_size": 65536 00:09:40.547 }, 00:09:40.547 { 00:09:40.547 "name": "BaseBdev3", 00:09:40.547 "uuid": "29aa2f20-4afe-408a-9b7f-1d0052692a2c", 00:09:40.547 "is_configured": true, 00:09:40.547 "data_offset": 0, 00:09:40.547 "data_size": 65536 00:09:40.547 } 00:09:40.547 ] 00:09:40.547 }' 00:09:40.547 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.547 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.807 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.807 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:40.807 21:40:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.807 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.807 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.807 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:40.807 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:40.807 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.807 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.807 [2024-09-29 21:40:59.725266] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.067 21:40:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.067 "name": "Existed_Raid", 00:09:41.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.067 "strip_size_kb": 0, 00:09:41.067 "state": "configuring", 00:09:41.067 "raid_level": "raid1", 00:09:41.067 "superblock": false, 00:09:41.067 "num_base_bdevs": 3, 00:09:41.067 "num_base_bdevs_discovered": 1, 00:09:41.067 "num_base_bdevs_operational": 3, 00:09:41.067 "base_bdevs_list": [ 00:09:41.067 { 00:09:41.067 "name": null, 00:09:41.067 "uuid": "ed3097b9-70bc-4ccc-ad98-86a63a97dcc1", 00:09:41.067 "is_configured": false, 00:09:41.067 "data_offset": 0, 00:09:41.067 "data_size": 65536 00:09:41.067 }, 00:09:41.067 { 00:09:41.067 "name": null, 00:09:41.067 "uuid": "25e97ab4-384c-410d-81ad-13807df51d28", 00:09:41.067 "is_configured": false, 00:09:41.067 "data_offset": 0, 00:09:41.067 "data_size": 65536 00:09:41.067 }, 00:09:41.067 { 00:09:41.067 "name": "BaseBdev3", 00:09:41.067 "uuid": "29aa2f20-4afe-408a-9b7f-1d0052692a2c", 00:09:41.067 "is_configured": true, 00:09:41.067 "data_offset": 0, 00:09:41.067 "data_size": 65536 00:09:41.067 } 00:09:41.067 ] 00:09:41.067 }' 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.067 21:40:59 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.637 [2024-09-29 21:41:00.385361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.637 "name": "Existed_Raid", 00:09:41.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.637 "strip_size_kb": 0, 00:09:41.637 "state": "configuring", 00:09:41.637 "raid_level": "raid1", 00:09:41.637 "superblock": false, 00:09:41.637 "num_base_bdevs": 3, 00:09:41.637 "num_base_bdevs_discovered": 2, 00:09:41.637 "num_base_bdevs_operational": 3, 00:09:41.637 "base_bdevs_list": [ 00:09:41.637 { 00:09:41.637 "name": null, 00:09:41.637 "uuid": "ed3097b9-70bc-4ccc-ad98-86a63a97dcc1", 00:09:41.637 "is_configured": false, 00:09:41.637 "data_offset": 0, 00:09:41.637 "data_size": 65536 00:09:41.637 }, 00:09:41.637 { 00:09:41.637 "name": "BaseBdev2", 00:09:41.637 "uuid": "25e97ab4-384c-410d-81ad-13807df51d28", 00:09:41.637 "is_configured": true, 00:09:41.637 "data_offset": 0, 00:09:41.637 "data_size": 65536 00:09:41.637 }, 00:09:41.637 { 
00:09:41.637 "name": "BaseBdev3", 00:09:41.637 "uuid": "29aa2f20-4afe-408a-9b7f-1d0052692a2c", 00:09:41.637 "is_configured": true, 00:09:41.637 "data_offset": 0, 00:09:41.637 "data_size": 65536 00:09:41.637 } 00:09:41.637 ] 00:09:41.637 }' 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.637 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.897 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:41.897 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.897 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.897 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.897 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.897 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:41.897 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.897 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.897 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.897 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:41.897 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.897 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ed3097b9-70bc-4ccc-ad98-86a63a97dcc1 00:09:41.897 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.897 21:41:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.157 [2024-09-29 21:41:00.909978] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:42.157 [2024-09-29 21:41:00.910027] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:42.157 [2024-09-29 21:41:00.910035] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:42.157 [2024-09-29 21:41:00.910353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:42.157 [2024-09-29 21:41:00.910545] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:42.157 [2024-09-29 21:41:00.910559] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:42.157 [2024-09-29 21:41:00.910820] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.157 NewBaseBdev 00:09:42.157 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.157 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:42.157 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:42.157 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.158 [ 00:09:42.158 { 00:09:42.158 "name": "NewBaseBdev", 00:09:42.158 "aliases": [ 00:09:42.158 "ed3097b9-70bc-4ccc-ad98-86a63a97dcc1" 00:09:42.158 ], 00:09:42.158 "product_name": "Malloc disk", 00:09:42.158 "block_size": 512, 00:09:42.158 "num_blocks": 65536, 00:09:42.158 "uuid": "ed3097b9-70bc-4ccc-ad98-86a63a97dcc1", 00:09:42.158 "assigned_rate_limits": { 00:09:42.158 "rw_ios_per_sec": 0, 00:09:42.158 "rw_mbytes_per_sec": 0, 00:09:42.158 "r_mbytes_per_sec": 0, 00:09:42.158 "w_mbytes_per_sec": 0 00:09:42.158 }, 00:09:42.158 "claimed": true, 00:09:42.158 "claim_type": "exclusive_write", 00:09:42.158 "zoned": false, 00:09:42.158 "supported_io_types": { 00:09:42.158 "read": true, 00:09:42.158 "write": true, 00:09:42.158 "unmap": true, 00:09:42.158 "flush": true, 00:09:42.158 "reset": true, 00:09:42.158 "nvme_admin": false, 00:09:42.158 "nvme_io": false, 00:09:42.158 "nvme_io_md": false, 00:09:42.158 "write_zeroes": true, 00:09:42.158 "zcopy": true, 00:09:42.158 "get_zone_info": false, 00:09:42.158 "zone_management": false, 00:09:42.158 "zone_append": false, 00:09:42.158 "compare": false, 00:09:42.158 "compare_and_write": false, 00:09:42.158 "abort": true, 00:09:42.158 "seek_hole": false, 00:09:42.158 "seek_data": false, 00:09:42.158 "copy": true, 00:09:42.158 "nvme_iov_md": false 00:09:42.158 }, 00:09:42.158 "memory_domains": [ 00:09:42.158 { 00:09:42.158 
"dma_device_id": "system", 00:09:42.158 "dma_device_type": 1 00:09:42.158 }, 00:09:42.158 { 00:09:42.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.158 "dma_device_type": 2 00:09:42.158 } 00:09:42.158 ], 00:09:42.158 "driver_specific": {} 00:09:42.158 } 00:09:42.158 ] 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.158 "name": "Existed_Raid", 00:09:42.158 "uuid": "de0e7d85-cf29-49e6-99e3-ff6cee5a2ff3", 00:09:42.158 "strip_size_kb": 0, 00:09:42.158 "state": "online", 00:09:42.158 "raid_level": "raid1", 00:09:42.158 "superblock": false, 00:09:42.158 "num_base_bdevs": 3, 00:09:42.158 "num_base_bdevs_discovered": 3, 00:09:42.158 "num_base_bdevs_operational": 3, 00:09:42.158 "base_bdevs_list": [ 00:09:42.158 { 00:09:42.158 "name": "NewBaseBdev", 00:09:42.158 "uuid": "ed3097b9-70bc-4ccc-ad98-86a63a97dcc1", 00:09:42.158 "is_configured": true, 00:09:42.158 "data_offset": 0, 00:09:42.158 "data_size": 65536 00:09:42.158 }, 00:09:42.158 { 00:09:42.158 "name": "BaseBdev2", 00:09:42.158 "uuid": "25e97ab4-384c-410d-81ad-13807df51d28", 00:09:42.158 "is_configured": true, 00:09:42.158 "data_offset": 0, 00:09:42.158 "data_size": 65536 00:09:42.158 }, 00:09:42.158 { 00:09:42.158 "name": "BaseBdev3", 00:09:42.158 "uuid": "29aa2f20-4afe-408a-9b7f-1d0052692a2c", 00:09:42.158 "is_configured": true, 00:09:42.158 "data_offset": 0, 00:09:42.158 "data_size": 65536 00:09:42.158 } 00:09:42.158 ] 00:09:42.158 }' 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.158 21:41:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.418 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.418 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:42.418 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.418 21:41:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.418 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.418 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.418 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.418 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:42.418 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.418 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.678 [2024-09-29 21:41:01.405446] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.678 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.678 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.678 "name": "Existed_Raid", 00:09:42.678 "aliases": [ 00:09:42.678 "de0e7d85-cf29-49e6-99e3-ff6cee5a2ff3" 00:09:42.678 ], 00:09:42.678 "product_name": "Raid Volume", 00:09:42.678 "block_size": 512, 00:09:42.678 "num_blocks": 65536, 00:09:42.678 "uuid": "de0e7d85-cf29-49e6-99e3-ff6cee5a2ff3", 00:09:42.678 "assigned_rate_limits": { 00:09:42.678 "rw_ios_per_sec": 0, 00:09:42.678 "rw_mbytes_per_sec": 0, 00:09:42.678 "r_mbytes_per_sec": 0, 00:09:42.678 "w_mbytes_per_sec": 0 00:09:42.678 }, 00:09:42.678 "claimed": false, 00:09:42.678 "zoned": false, 00:09:42.678 "supported_io_types": { 00:09:42.678 "read": true, 00:09:42.678 "write": true, 00:09:42.678 "unmap": false, 00:09:42.678 "flush": false, 00:09:42.678 "reset": true, 00:09:42.678 "nvme_admin": false, 00:09:42.678 "nvme_io": false, 00:09:42.678 "nvme_io_md": false, 00:09:42.678 "write_zeroes": true, 00:09:42.678 "zcopy": false, 00:09:42.678 
"get_zone_info": false, 00:09:42.678 "zone_management": false, 00:09:42.678 "zone_append": false, 00:09:42.678 "compare": false, 00:09:42.678 "compare_and_write": false, 00:09:42.678 "abort": false, 00:09:42.678 "seek_hole": false, 00:09:42.678 "seek_data": false, 00:09:42.678 "copy": false, 00:09:42.678 "nvme_iov_md": false 00:09:42.678 }, 00:09:42.678 "memory_domains": [ 00:09:42.678 { 00:09:42.678 "dma_device_id": "system", 00:09:42.678 "dma_device_type": 1 00:09:42.678 }, 00:09:42.678 { 00:09:42.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.678 "dma_device_type": 2 00:09:42.678 }, 00:09:42.678 { 00:09:42.678 "dma_device_id": "system", 00:09:42.678 "dma_device_type": 1 00:09:42.678 }, 00:09:42.678 { 00:09:42.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.678 "dma_device_type": 2 00:09:42.678 }, 00:09:42.678 { 00:09:42.678 "dma_device_id": "system", 00:09:42.678 "dma_device_type": 1 00:09:42.678 }, 00:09:42.678 { 00:09:42.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.678 "dma_device_type": 2 00:09:42.678 } 00:09:42.678 ], 00:09:42.678 "driver_specific": { 00:09:42.678 "raid": { 00:09:42.678 "uuid": "de0e7d85-cf29-49e6-99e3-ff6cee5a2ff3", 00:09:42.678 "strip_size_kb": 0, 00:09:42.678 "state": "online", 00:09:42.678 "raid_level": "raid1", 00:09:42.678 "superblock": false, 00:09:42.678 "num_base_bdevs": 3, 00:09:42.678 "num_base_bdevs_discovered": 3, 00:09:42.678 "num_base_bdevs_operational": 3, 00:09:42.678 "base_bdevs_list": [ 00:09:42.678 { 00:09:42.678 "name": "NewBaseBdev", 00:09:42.678 "uuid": "ed3097b9-70bc-4ccc-ad98-86a63a97dcc1", 00:09:42.678 "is_configured": true, 00:09:42.678 "data_offset": 0, 00:09:42.678 "data_size": 65536 00:09:42.678 }, 00:09:42.678 { 00:09:42.678 "name": "BaseBdev2", 00:09:42.678 "uuid": "25e97ab4-384c-410d-81ad-13807df51d28", 00:09:42.678 "is_configured": true, 00:09:42.678 "data_offset": 0, 00:09:42.678 "data_size": 65536 00:09:42.678 }, 00:09:42.678 { 00:09:42.678 "name": "BaseBdev3", 00:09:42.678 "uuid": 
"29aa2f20-4afe-408a-9b7f-1d0052692a2c", 00:09:42.678 "is_configured": true, 00:09:42.678 "data_offset": 0, 00:09:42.678 "data_size": 65536 00:09:42.678 } 00:09:42.678 ] 00:09:42.678 } 00:09:42.678 } 00:09:42.678 }' 00:09:42.678 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.678 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:42.678 BaseBdev2 00:09:42.678 BaseBdev3' 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.679 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.939 
[2024-09-29 21:41:01.660716] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.939 [2024-09-29 21:41:01.660748] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.939 [2024-09-29 21:41:01.660808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.939 [2024-09-29 21:41:01.661133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.939 [2024-09-29 21:41:01.661144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:42.939 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.939 21:41:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67468 00:09:42.939 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67468 ']' 00:09:42.939 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67468 00:09:42.939 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:42.939 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.939 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67468 00:09:42.939 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.939 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.939 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67468' 00:09:42.939 killing process with pid 67468 00:09:42.939 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67468 00:09:42.939 [2024-09-29 
21:41:01.705484] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.939 21:41:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67468 00:09:43.199 [2024-09-29 21:41:02.026335] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:44.582 00:09:44.582 real 0m10.834s 00:09:44.582 user 0m16.820s 00:09:44.582 sys 0m2.032s 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.582 ************************************ 00:09:44.582 END TEST raid_state_function_test 00:09:44.582 ************************************ 00:09:44.582 21:41:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:44.582 21:41:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:44.582 21:41:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.582 21:41:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.582 ************************************ 00:09:44.582 START TEST raid_state_function_test_sb 00:09:44.582 ************************************ 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:44.582 21:41:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:44.582 
21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68096 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68096' 00:09:44.582 Process raid pid: 68096 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68096 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 68096 ']' 00:09:44.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.582 21:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.583 21:41:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.583 [2024-09-29 21:41:03.546029] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:44.583 [2024-09-29 21:41:03.546153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.843 [2024-09-29 21:41:03.714789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.102 [2024-09-29 21:41:03.961017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.363 [2024-09-29 21:41:04.195306] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.363 [2024-09-29 21:41:04.195349] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.623 [2024-09-29 21:41:04.380752] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.623 [2024-09-29 21:41:04.380815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.623 [2024-09-29 21:41:04.380825] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.623 [2024-09-29 21:41:04.380835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.623 [2024-09-29 21:41:04.380841] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:45.623 [2024-09-29 21:41:04.380850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.623 "name": "Existed_Raid", 00:09:45.623 "uuid": "2a68e586-e24c-437f-841f-3e4e4bf58170", 00:09:45.623 "strip_size_kb": 0, 00:09:45.623 "state": "configuring", 00:09:45.623 "raid_level": "raid1", 00:09:45.623 "superblock": true, 00:09:45.623 "num_base_bdevs": 3, 00:09:45.623 "num_base_bdevs_discovered": 0, 00:09:45.623 "num_base_bdevs_operational": 3, 00:09:45.623 "base_bdevs_list": [ 00:09:45.623 { 00:09:45.623 "name": "BaseBdev1", 00:09:45.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.623 "is_configured": false, 00:09:45.623 "data_offset": 0, 00:09:45.623 "data_size": 0 00:09:45.623 }, 00:09:45.623 { 00:09:45.623 "name": "BaseBdev2", 00:09:45.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.623 "is_configured": false, 00:09:45.623 "data_offset": 0, 00:09:45.623 "data_size": 0 00:09:45.623 }, 00:09:45.623 { 00:09:45.623 "name": "BaseBdev3", 00:09:45.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.623 "is_configured": false, 00:09:45.623 "data_offset": 0, 00:09:45.623 "data_size": 0 00:09:45.623 } 00:09:45.623 ] 00:09:45.623 }' 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.623 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.883 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.883 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.883 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.883 [2024-09-29 21:41:04.851911] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.883 [2024-09-29 21:41:04.851949] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:45.883 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.883 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:45.883 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.883 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.883 [2024-09-29 21:41:04.863928] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:45.883 [2024-09-29 21:41:04.864016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:45.883 [2024-09-29 21:41:04.864058] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.883 [2024-09-29 21:41:04.864083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.883 [2024-09-29 21:41:04.864101] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:45.883 [2024-09-29 21:41:04.864124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.144 [2024-09-29 21:41:04.929959] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.144 BaseBdev1 
00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.144 [ 00:09:46.144 { 00:09:46.144 "name": "BaseBdev1", 00:09:46.144 "aliases": [ 00:09:46.144 "33392c11-0889-4350-bb80-21d1967add0e" 00:09:46.144 ], 00:09:46.144 "product_name": "Malloc disk", 00:09:46.144 "block_size": 512, 00:09:46.144 "num_blocks": 65536, 00:09:46.144 "uuid": "33392c11-0889-4350-bb80-21d1967add0e", 00:09:46.144 "assigned_rate_limits": { 00:09:46.144 
"rw_ios_per_sec": 0, 00:09:46.144 "rw_mbytes_per_sec": 0, 00:09:46.144 "r_mbytes_per_sec": 0, 00:09:46.144 "w_mbytes_per_sec": 0 00:09:46.144 }, 00:09:46.144 "claimed": true, 00:09:46.144 "claim_type": "exclusive_write", 00:09:46.144 "zoned": false, 00:09:46.144 "supported_io_types": { 00:09:46.144 "read": true, 00:09:46.144 "write": true, 00:09:46.144 "unmap": true, 00:09:46.144 "flush": true, 00:09:46.144 "reset": true, 00:09:46.144 "nvme_admin": false, 00:09:46.144 "nvme_io": false, 00:09:46.144 "nvme_io_md": false, 00:09:46.144 "write_zeroes": true, 00:09:46.144 "zcopy": true, 00:09:46.144 "get_zone_info": false, 00:09:46.144 "zone_management": false, 00:09:46.144 "zone_append": false, 00:09:46.144 "compare": false, 00:09:46.144 "compare_and_write": false, 00:09:46.144 "abort": true, 00:09:46.144 "seek_hole": false, 00:09:46.144 "seek_data": false, 00:09:46.144 "copy": true, 00:09:46.144 "nvme_iov_md": false 00:09:46.144 }, 00:09:46.144 "memory_domains": [ 00:09:46.144 { 00:09:46.144 "dma_device_id": "system", 00:09:46.144 "dma_device_type": 1 00:09:46.144 }, 00:09:46.144 { 00:09:46.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.144 "dma_device_type": 2 00:09:46.144 } 00:09:46.144 ], 00:09:46.144 "driver_specific": {} 00:09:46.144 } 00:09:46.144 ] 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.144 21:41:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.144 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.144 "name": "Existed_Raid", 00:09:46.144 "uuid": "30ef0d02-78d1-4595-913b-e5459a94bcf6", 00:09:46.144 "strip_size_kb": 0, 00:09:46.144 "state": "configuring", 00:09:46.144 "raid_level": "raid1", 00:09:46.144 "superblock": true, 00:09:46.144 "num_base_bdevs": 3, 00:09:46.144 "num_base_bdevs_discovered": 1, 00:09:46.144 "num_base_bdevs_operational": 3, 00:09:46.144 "base_bdevs_list": [ 00:09:46.144 { 00:09:46.144 "name": "BaseBdev1", 00:09:46.144 "uuid": "33392c11-0889-4350-bb80-21d1967add0e", 00:09:46.144 "is_configured": true, 00:09:46.144 "data_offset": 2048, 00:09:46.144 "data_size": 63488 
00:09:46.144 }, 00:09:46.144 { 00:09:46.144 "name": "BaseBdev2", 00:09:46.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.144 "is_configured": false, 00:09:46.144 "data_offset": 0, 00:09:46.144 "data_size": 0 00:09:46.144 }, 00:09:46.144 { 00:09:46.144 "name": "BaseBdev3", 00:09:46.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.144 "is_configured": false, 00:09:46.144 "data_offset": 0, 00:09:46.144 "data_size": 0 00:09:46.144 } 00:09:46.144 ] 00:09:46.144 }' 00:09:46.144 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.144 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.404 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:46.404 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.404 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.664 [2024-09-29 21:41:05.389175] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:46.664 [2024-09-29 21:41:05.389278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.664 [2024-09-29 21:41:05.401224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.664 [2024-09-29 21:41:05.403365] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.664 [2024-09-29 21:41:05.403458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.664 [2024-09-29 21:41:05.403472] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:46.664 [2024-09-29 21:41:05.403482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
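Editor's note: the sizes in these dumps are internally consistent. `bdev_malloc_create 32 512` yields `"num_blocks": 65536`; once claimed into a superblock (`-s`) raid, each base reports `"data_offset": 2048` and `"data_size": 63488`. The arithmetic below reproduces those numbers, assuming a 1 MiB superblock region (inferred from this log's values, not from SPDK documentation):

```python
# Reconstructing the block counts seen in the log (assumption: the
# superblock region at the start of each base bdev is 1 MiB).
block_size = 512
num_blocks = 32 * 1024 * 1024 // block_size   # "num_blocks": 65536 (32 MiB malloc)
sb_bytes = 1 * 1024 * 1024                    # assumed superblock region
data_offset = sb_bytes // block_size          # "data_offset": 2048
data_size = num_blocks - data_offset          # "data_size": 63488
print(num_blocks, data_offset, data_size)     # 65536 2048 63488
```

The raid volume's own `"num_blocks": 63488` matches `data_size`, as expected for raid1 (mirroring, no striping: `"strip_size_kb": 0`).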
00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.664 "name": "Existed_Raid", 00:09:46.664 "uuid": "625e4402-0752-4e89-98e5-c27e3f12125a", 00:09:46.664 "strip_size_kb": 0, 00:09:46.664 "state": "configuring", 00:09:46.664 "raid_level": "raid1", 00:09:46.664 "superblock": true, 00:09:46.664 "num_base_bdevs": 3, 00:09:46.664 "num_base_bdevs_discovered": 1, 00:09:46.664 "num_base_bdevs_operational": 3, 00:09:46.664 "base_bdevs_list": [ 00:09:46.664 { 00:09:46.664 "name": "BaseBdev1", 00:09:46.664 "uuid": "33392c11-0889-4350-bb80-21d1967add0e", 00:09:46.664 "is_configured": true, 00:09:46.664 "data_offset": 2048, 00:09:46.664 "data_size": 63488 00:09:46.664 }, 00:09:46.664 { 00:09:46.664 "name": "BaseBdev2", 00:09:46.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.664 "is_configured": false, 00:09:46.664 "data_offset": 0, 00:09:46.664 "data_size": 0 00:09:46.664 }, 00:09:46.664 { 00:09:46.664 "name": "BaseBdev3", 00:09:46.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.664 "is_configured": false, 00:09:46.664 "data_offset": 0, 00:09:46.664 "data_size": 0 00:09:46.664 } 00:09:46.664 ] 00:09:46.664 }' 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.664 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.924 [2024-09-29 21:41:05.880549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.924 BaseBdev2 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:46.924 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.924 [ 00:09:46.924 { 00:09:46.924 "name": "BaseBdev2", 00:09:46.924 "aliases": [ 00:09:46.924 "6b0d8b86-6edb-449d-9f81-44064147f3af" 00:09:46.924 ], 00:09:46.924 "product_name": "Malloc disk", 00:09:46.924 "block_size": 512, 00:09:46.924 "num_blocks": 65536, 00:09:47.184 "uuid": "6b0d8b86-6edb-449d-9f81-44064147f3af", 00:09:47.184 "assigned_rate_limits": { 00:09:47.184 "rw_ios_per_sec": 0, 00:09:47.184 "rw_mbytes_per_sec": 0, 00:09:47.184 "r_mbytes_per_sec": 0, 00:09:47.184 "w_mbytes_per_sec": 0 00:09:47.184 }, 00:09:47.184 "claimed": true, 00:09:47.184 "claim_type": "exclusive_write", 00:09:47.184 "zoned": false, 00:09:47.184 "supported_io_types": { 00:09:47.184 "read": true, 00:09:47.184 "write": true, 00:09:47.184 "unmap": true, 00:09:47.184 "flush": true, 00:09:47.184 "reset": true, 00:09:47.184 "nvme_admin": false, 00:09:47.184 "nvme_io": false, 00:09:47.184 "nvme_io_md": false, 00:09:47.184 "write_zeroes": true, 00:09:47.184 "zcopy": true, 00:09:47.184 "get_zone_info": false, 00:09:47.184 "zone_management": false, 00:09:47.184 "zone_append": false, 00:09:47.184 "compare": false, 00:09:47.184 "compare_and_write": false, 00:09:47.184 "abort": true, 00:09:47.184 "seek_hole": false, 00:09:47.184 "seek_data": false, 00:09:47.184 "copy": true, 00:09:47.184 "nvme_iov_md": false 00:09:47.184 }, 00:09:47.184 "memory_domains": [ 00:09:47.184 { 00:09:47.184 "dma_device_id": "system", 00:09:47.184 "dma_device_type": 1 00:09:47.184 }, 00:09:47.184 { 00:09:47.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.184 "dma_device_type": 2 00:09:47.184 } 00:09:47.184 ], 00:09:47.184 "driver_specific": {} 00:09:47.184 } 00:09:47.184 ] 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
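Editor's note: the `waitforbdev` helper invoked at `bdev_raid.sh@243`/`@253` issues `rpc_cmd bdev_get_bdevs -b NAME -t 2000`; in SPDK the `-t` timeout is handled by the target itself, but the overall shape is a wait-until-present with a deadline. A rough Python sketch under that assumption (`get_bdev` is a stand-in probe, not a real RPC client):

```python
import time

def waitforbdev(get_bdev, name, timeout_ms=2000, poll_ms=50):
    """Return True once get_bdev(name) reports the bdev, False on timeout.
    A client-side polling approximation of 'bdev_get_bdevs -b NAME -t MS'."""
    deadline = time.monotonic() + timeout_ms / 1000
    while True:
        if get_bdev(name):
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll_ms / 1000)

registry = {"BaseBdev2"}  # pretend bdev_malloc_create already registered it
print(waitforbdev(lambda n: n in registry, "BaseBdev2"))  # True
```

In the log the malloc bdev is registered before `waitforbdev` runs, so the RPC returns immediately; the timeout path matters when examine callbacks are still in flight (hence the preceding `bdev_wait_for_examine`).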
00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.184 
21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.184 "name": "Existed_Raid", 00:09:47.184 "uuid": "625e4402-0752-4e89-98e5-c27e3f12125a", 00:09:47.184 "strip_size_kb": 0, 00:09:47.184 "state": "configuring", 00:09:47.184 "raid_level": "raid1", 00:09:47.184 "superblock": true, 00:09:47.184 "num_base_bdevs": 3, 00:09:47.184 "num_base_bdevs_discovered": 2, 00:09:47.184 "num_base_bdevs_operational": 3, 00:09:47.184 "base_bdevs_list": [ 00:09:47.184 { 00:09:47.184 "name": "BaseBdev1", 00:09:47.184 "uuid": "33392c11-0889-4350-bb80-21d1967add0e", 00:09:47.184 "is_configured": true, 00:09:47.184 "data_offset": 2048, 00:09:47.184 "data_size": 63488 00:09:47.184 }, 00:09:47.184 { 00:09:47.184 "name": "BaseBdev2", 00:09:47.184 "uuid": "6b0d8b86-6edb-449d-9f81-44064147f3af", 00:09:47.184 "is_configured": true, 00:09:47.184 "data_offset": 2048, 00:09:47.184 "data_size": 63488 00:09:47.184 }, 00:09:47.184 { 00:09:47.184 "name": "BaseBdev3", 00:09:47.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.184 "is_configured": false, 00:09:47.184 "data_offset": 0, 00:09:47.184 "data_size": 0 00:09:47.184 } 00:09:47.184 ] 00:09:47.184 }' 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.184 21:41:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.444 [2024-09-29 21:41:06.368011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.444 [2024-09-29 21:41:06.368301] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:47.444 [2024-09-29 21:41:06.368329] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:47.444 BaseBdev3 00:09:47.444 [2024-09-29 21:41:06.368877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:47.444 [2024-09-29 21:41:06.369046] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:47.444 [2024-09-29 21:41:06.369056] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:47.444 [2024-09-29 21:41:06.369246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.444 21:41:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.444 [ 00:09:47.444 { 00:09:47.444 "name": "BaseBdev3", 00:09:47.444 "aliases": [ 00:09:47.444 "1dedab11-622c-43ac-9a1c-8f1c69033cc9" 00:09:47.444 ], 00:09:47.444 "product_name": "Malloc disk", 00:09:47.444 "block_size": 512, 00:09:47.444 "num_blocks": 65536, 00:09:47.444 "uuid": "1dedab11-622c-43ac-9a1c-8f1c69033cc9", 00:09:47.444 "assigned_rate_limits": { 00:09:47.444 "rw_ios_per_sec": 0, 00:09:47.444 "rw_mbytes_per_sec": 0, 00:09:47.444 "r_mbytes_per_sec": 0, 00:09:47.444 "w_mbytes_per_sec": 0 00:09:47.444 }, 00:09:47.444 "claimed": true, 00:09:47.444 "claim_type": "exclusive_write", 00:09:47.444 "zoned": false, 00:09:47.444 "supported_io_types": { 00:09:47.444 "read": true, 00:09:47.444 "write": true, 00:09:47.444 "unmap": true, 00:09:47.444 "flush": true, 00:09:47.444 "reset": true, 00:09:47.444 "nvme_admin": false, 00:09:47.444 "nvme_io": false, 00:09:47.444 "nvme_io_md": false, 00:09:47.444 "write_zeroes": true, 00:09:47.444 "zcopy": true, 00:09:47.444 "get_zone_info": false, 00:09:47.444 "zone_management": false, 00:09:47.444 "zone_append": false, 00:09:47.444 "compare": false, 00:09:47.444 "compare_and_write": false, 00:09:47.444 "abort": true, 00:09:47.444 "seek_hole": false, 00:09:47.444 "seek_data": false, 00:09:47.444 "copy": true, 00:09:47.444 "nvme_iov_md": false 00:09:47.444 }, 00:09:47.444 "memory_domains": [ 00:09:47.444 { 00:09:47.444 "dma_device_id": "system", 00:09:47.444 "dma_device_type": 1 00:09:47.444 }, 00:09:47.444 { 00:09:47.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.444 "dma_device_type": 2 00:09:47.444 } 00:09:47.444 ], 00:09:47.444 "driver_specific": {} 00:09:47.444 } 00:09:47.444 ] 
00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.444 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.445 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.445 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.445 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.445 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.445 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.445 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.445 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.445 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.445 
21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.704 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.704 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.704 "name": "Existed_Raid", 00:09:47.704 "uuid": "625e4402-0752-4e89-98e5-c27e3f12125a", 00:09:47.704 "strip_size_kb": 0, 00:09:47.704 "state": "online", 00:09:47.704 "raid_level": "raid1", 00:09:47.704 "superblock": true, 00:09:47.704 "num_base_bdevs": 3, 00:09:47.704 "num_base_bdevs_discovered": 3, 00:09:47.704 "num_base_bdevs_operational": 3, 00:09:47.704 "base_bdevs_list": [ 00:09:47.704 { 00:09:47.704 "name": "BaseBdev1", 00:09:47.704 "uuid": "33392c11-0889-4350-bb80-21d1967add0e", 00:09:47.704 "is_configured": true, 00:09:47.704 "data_offset": 2048, 00:09:47.704 "data_size": 63488 00:09:47.704 }, 00:09:47.704 { 00:09:47.704 "name": "BaseBdev2", 00:09:47.704 "uuid": "6b0d8b86-6edb-449d-9f81-44064147f3af", 00:09:47.704 "is_configured": true, 00:09:47.704 "data_offset": 2048, 00:09:47.704 "data_size": 63488 00:09:47.704 }, 00:09:47.704 { 00:09:47.704 "name": "BaseBdev3", 00:09:47.704 "uuid": "1dedab11-622c-43ac-9a1c-8f1c69033cc9", 00:09:47.704 "is_configured": true, 00:09:47.704 "data_offset": 2048, 00:09:47.704 "data_size": 63488 00:09:47.704 } 00:09:47.704 ] 00:09:47.704 }' 00:09:47.704 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.704 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.964 [2024-09-29 21:41:06.851482] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.964 "name": "Existed_Raid", 00:09:47.964 "aliases": [ 00:09:47.964 "625e4402-0752-4e89-98e5-c27e3f12125a" 00:09:47.964 ], 00:09:47.964 "product_name": "Raid Volume", 00:09:47.964 "block_size": 512, 00:09:47.964 "num_blocks": 63488, 00:09:47.964 "uuid": "625e4402-0752-4e89-98e5-c27e3f12125a", 00:09:47.964 "assigned_rate_limits": { 00:09:47.964 "rw_ios_per_sec": 0, 00:09:47.964 "rw_mbytes_per_sec": 0, 00:09:47.964 "r_mbytes_per_sec": 0, 00:09:47.964 "w_mbytes_per_sec": 0 00:09:47.964 }, 00:09:47.964 "claimed": false, 00:09:47.964 "zoned": false, 00:09:47.964 "supported_io_types": { 00:09:47.964 "read": true, 00:09:47.964 "write": true, 00:09:47.964 "unmap": false, 00:09:47.964 "flush": false, 00:09:47.964 "reset": true, 00:09:47.964 "nvme_admin": false, 00:09:47.964 "nvme_io": false, 00:09:47.964 "nvme_io_md": false, 00:09:47.964 "write_zeroes": true, 
00:09:47.964 "zcopy": false, 00:09:47.964 "get_zone_info": false, 00:09:47.964 "zone_management": false, 00:09:47.964 "zone_append": false, 00:09:47.964 "compare": false, 00:09:47.964 "compare_and_write": false, 00:09:47.964 "abort": false, 00:09:47.964 "seek_hole": false, 00:09:47.964 "seek_data": false, 00:09:47.964 "copy": false, 00:09:47.964 "nvme_iov_md": false 00:09:47.964 }, 00:09:47.964 "memory_domains": [ 00:09:47.964 { 00:09:47.964 "dma_device_id": "system", 00:09:47.964 "dma_device_type": 1 00:09:47.964 }, 00:09:47.964 { 00:09:47.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.964 "dma_device_type": 2 00:09:47.964 }, 00:09:47.964 { 00:09:47.964 "dma_device_id": "system", 00:09:47.964 "dma_device_type": 1 00:09:47.964 }, 00:09:47.964 { 00:09:47.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.964 "dma_device_type": 2 00:09:47.964 }, 00:09:47.964 { 00:09:47.964 "dma_device_id": "system", 00:09:47.964 "dma_device_type": 1 00:09:47.964 }, 00:09:47.964 { 00:09:47.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.964 "dma_device_type": 2 00:09:47.964 } 00:09:47.964 ], 00:09:47.964 "driver_specific": { 00:09:47.964 "raid": { 00:09:47.964 "uuid": "625e4402-0752-4e89-98e5-c27e3f12125a", 00:09:47.964 "strip_size_kb": 0, 00:09:47.964 "state": "online", 00:09:47.964 "raid_level": "raid1", 00:09:47.964 "superblock": true, 00:09:47.964 "num_base_bdevs": 3, 00:09:47.964 "num_base_bdevs_discovered": 3, 00:09:47.964 "num_base_bdevs_operational": 3, 00:09:47.964 "base_bdevs_list": [ 00:09:47.964 { 00:09:47.964 "name": "BaseBdev1", 00:09:47.964 "uuid": "33392c11-0889-4350-bb80-21d1967add0e", 00:09:47.964 "is_configured": true, 00:09:47.964 "data_offset": 2048, 00:09:47.964 "data_size": 63488 00:09:47.964 }, 00:09:47.964 { 00:09:47.964 "name": "BaseBdev2", 00:09:47.964 "uuid": "6b0d8b86-6edb-449d-9f81-44064147f3af", 00:09:47.964 "is_configured": true, 00:09:47.964 "data_offset": 2048, 00:09:47.964 "data_size": 63488 00:09:47.964 }, 00:09:47.964 { 
00:09:47.964 "name": "BaseBdev3", 00:09:47.964 "uuid": "1dedab11-622c-43ac-9a1c-8f1c69033cc9", 00:09:47.964 "is_configured": true, 00:09:47.964 "data_offset": 2048, 00:09:47.964 "data_size": 63488 00:09:47.964 } 00:09:47.964 ] 00:09:47.964 } 00:09:47.964 } 00:09:47.964 }' 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:47.964 BaseBdev2 00:09:47.964 BaseBdev3' 00:09:47.964 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.224 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:48.224 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.224 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:48.224 21:41:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.224 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.224 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.224 21:41:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.224 21:41:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.224 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.224 [2024-09-29 21:41:07.114782] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.484 
21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.484 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.484 "name": "Existed_Raid", 00:09:48.484 "uuid": "625e4402-0752-4e89-98e5-c27e3f12125a", 00:09:48.484 "strip_size_kb": 0, 00:09:48.484 "state": "online", 00:09:48.484 "raid_level": "raid1", 00:09:48.484 "superblock": true, 00:09:48.484 "num_base_bdevs": 3, 00:09:48.484 "num_base_bdevs_discovered": 2, 00:09:48.484 "num_base_bdevs_operational": 2, 00:09:48.484 "base_bdevs_list": [ 00:09:48.484 { 00:09:48.484 "name": null, 00:09:48.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.484 "is_configured": false, 00:09:48.484 "data_offset": 0, 00:09:48.484 "data_size": 63488 00:09:48.484 }, 00:09:48.484 { 00:09:48.484 "name": "BaseBdev2", 00:09:48.484 "uuid": "6b0d8b86-6edb-449d-9f81-44064147f3af", 00:09:48.484 "is_configured": true, 00:09:48.484 "data_offset": 2048, 00:09:48.484 "data_size": 63488 00:09:48.484 }, 00:09:48.484 { 00:09:48.484 "name": "BaseBdev3", 00:09:48.484 "uuid": "1dedab11-622c-43ac-9a1c-8f1c69033cc9", 00:09:48.484 "is_configured": true, 00:09:48.485 "data_offset": 2048, 00:09:48.485 "data_size": 63488 00:09:48.485 } 00:09:48.485 ] 00:09:48.485 }' 00:09:48.485 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.485 
21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.744 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:48.744 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:48.744 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:48.745 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.745 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.745 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.745 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.745 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:48.745 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:48.745 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:48.745 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.745 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.745 [2024-09-29 21:41:07.640372] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.005 [2024-09-29 21:41:07.795801] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:49.005 [2024-09-29 21:41:07.796017] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.005 [2024-09-29 21:41:07.893697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.005 [2024-09-29 21:41:07.893824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.005 [2024-09-29 21:41:07.893865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.005 21:41:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.266 BaseBdev2 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.266 [ 00:09:49.266 { 00:09:49.266 "name": "BaseBdev2", 00:09:49.266 "aliases": [ 00:09:49.266 "fe5653a2-7e78-469d-b3c0-d296ee33024a" 00:09:49.266 ], 00:09:49.266 "product_name": "Malloc disk", 00:09:49.266 "block_size": 512, 00:09:49.266 "num_blocks": 65536, 00:09:49.266 "uuid": "fe5653a2-7e78-469d-b3c0-d296ee33024a", 00:09:49.266 "assigned_rate_limits": { 00:09:49.266 "rw_ios_per_sec": 0, 00:09:49.266 "rw_mbytes_per_sec": 0, 00:09:49.266 "r_mbytes_per_sec": 0, 00:09:49.266 "w_mbytes_per_sec": 0 00:09:49.266 }, 00:09:49.266 "claimed": false, 00:09:49.266 "zoned": false, 00:09:49.266 "supported_io_types": { 00:09:49.266 "read": true, 00:09:49.266 "write": true, 00:09:49.266 "unmap": true, 00:09:49.266 "flush": true, 00:09:49.266 "reset": true, 00:09:49.266 "nvme_admin": false, 00:09:49.266 "nvme_io": false, 00:09:49.266 
"nvme_io_md": false, 00:09:49.266 "write_zeroes": true, 00:09:49.266 "zcopy": true, 00:09:49.266 "get_zone_info": false, 00:09:49.266 "zone_management": false, 00:09:49.266 "zone_append": false, 00:09:49.266 "compare": false, 00:09:49.266 "compare_and_write": false, 00:09:49.266 "abort": true, 00:09:49.266 "seek_hole": false, 00:09:49.266 "seek_data": false, 00:09:49.266 "copy": true, 00:09:49.266 "nvme_iov_md": false 00:09:49.266 }, 00:09:49.266 "memory_domains": [ 00:09:49.266 { 00:09:49.266 "dma_device_id": "system", 00:09:49.266 "dma_device_type": 1 00:09:49.266 }, 00:09:49.266 { 00:09:49.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.266 "dma_device_type": 2 00:09:49.266 } 00:09:49.266 ], 00:09:49.266 "driver_specific": {} 00:09:49.266 } 00:09:49.266 ] 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.266 BaseBdev3 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.266 [ 00:09:49.266 { 00:09:49.266 "name": "BaseBdev3", 00:09:49.266 "aliases": [ 00:09:49.266 "5539d37e-2423-40e7-8c95-12cd9eaab895" 00:09:49.266 ], 00:09:49.266 "product_name": "Malloc disk", 00:09:49.266 "block_size": 512, 00:09:49.266 "num_blocks": 65536, 00:09:49.266 "uuid": "5539d37e-2423-40e7-8c95-12cd9eaab895", 00:09:49.266 "assigned_rate_limits": { 00:09:49.266 "rw_ios_per_sec": 0, 00:09:49.266 "rw_mbytes_per_sec": 0, 00:09:49.266 "r_mbytes_per_sec": 0, 00:09:49.266 "w_mbytes_per_sec": 0 00:09:49.266 }, 00:09:49.266 "claimed": false, 00:09:49.266 "zoned": false, 00:09:49.266 "supported_io_types": { 00:09:49.266 "read": true, 00:09:49.266 "write": true, 00:09:49.266 "unmap": true, 00:09:49.266 "flush": true, 00:09:49.266 "reset": true, 00:09:49.266 "nvme_admin": false, 
00:09:49.266 "nvme_io": false, 00:09:49.266 "nvme_io_md": false, 00:09:49.266 "write_zeroes": true, 00:09:49.266 "zcopy": true, 00:09:49.266 "get_zone_info": false, 00:09:49.266 "zone_management": false, 00:09:49.266 "zone_append": false, 00:09:49.266 "compare": false, 00:09:49.266 "compare_and_write": false, 00:09:49.266 "abort": true, 00:09:49.266 "seek_hole": false, 00:09:49.266 "seek_data": false, 00:09:49.266 "copy": true, 00:09:49.266 "nvme_iov_md": false 00:09:49.266 }, 00:09:49.266 "memory_domains": [ 00:09:49.266 { 00:09:49.266 "dma_device_id": "system", 00:09:49.266 "dma_device_type": 1 00:09:49.266 }, 00:09:49.266 { 00:09:49.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.266 "dma_device_type": 2 00:09:49.266 } 00:09:49.266 ], 00:09:49.266 "driver_specific": {} 00:09:49.266 } 00:09:49.266 ] 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.266 [2024-09-29 21:41:08.139913] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:49.266 [2024-09-29 21:41:08.139969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:49.266 [2024-09-29 21:41:08.139988] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.266 [2024-09-29 21:41:08.142095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.266 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.266 
21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.267 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.267 "name": "Existed_Raid", 00:09:49.267 "uuid": "1fae4a99-3e41-4a7a-ac23-72d7f93e3476", 00:09:49.267 "strip_size_kb": 0, 00:09:49.267 "state": "configuring", 00:09:49.267 "raid_level": "raid1", 00:09:49.267 "superblock": true, 00:09:49.267 "num_base_bdevs": 3, 00:09:49.267 "num_base_bdevs_discovered": 2, 00:09:49.267 "num_base_bdevs_operational": 3, 00:09:49.267 "base_bdevs_list": [ 00:09:49.267 { 00:09:49.267 "name": "BaseBdev1", 00:09:49.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.267 "is_configured": false, 00:09:49.267 "data_offset": 0, 00:09:49.267 "data_size": 0 00:09:49.267 }, 00:09:49.267 { 00:09:49.267 "name": "BaseBdev2", 00:09:49.267 "uuid": "fe5653a2-7e78-469d-b3c0-d296ee33024a", 00:09:49.267 "is_configured": true, 00:09:49.267 "data_offset": 2048, 00:09:49.267 "data_size": 63488 00:09:49.267 }, 00:09:49.267 { 00:09:49.267 "name": "BaseBdev3", 00:09:49.267 "uuid": "5539d37e-2423-40e7-8c95-12cd9eaab895", 00:09:49.267 "is_configured": true, 00:09:49.267 "data_offset": 2048, 00:09:49.267 "data_size": 63488 00:09:49.267 } 00:09:49.267 ] 00:09:49.267 }' 00:09:49.267 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.267 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.835 [2024-09-29 21:41:08.523208] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:49.835 21:41:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.835 "name": 
"Existed_Raid", 00:09:49.835 "uuid": "1fae4a99-3e41-4a7a-ac23-72d7f93e3476", 00:09:49.835 "strip_size_kb": 0, 00:09:49.835 "state": "configuring", 00:09:49.835 "raid_level": "raid1", 00:09:49.835 "superblock": true, 00:09:49.835 "num_base_bdevs": 3, 00:09:49.835 "num_base_bdevs_discovered": 1, 00:09:49.835 "num_base_bdevs_operational": 3, 00:09:49.835 "base_bdevs_list": [ 00:09:49.835 { 00:09:49.835 "name": "BaseBdev1", 00:09:49.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.835 "is_configured": false, 00:09:49.835 "data_offset": 0, 00:09:49.835 "data_size": 0 00:09:49.835 }, 00:09:49.835 { 00:09:49.835 "name": null, 00:09:49.835 "uuid": "fe5653a2-7e78-469d-b3c0-d296ee33024a", 00:09:49.835 "is_configured": false, 00:09:49.835 "data_offset": 0, 00:09:49.835 "data_size": 63488 00:09:49.835 }, 00:09:49.835 { 00:09:49.835 "name": "BaseBdev3", 00:09:49.835 "uuid": "5539d37e-2423-40e7-8c95-12cd9eaab895", 00:09:49.835 "is_configured": true, 00:09:49.835 "data_offset": 2048, 00:09:49.835 "data_size": 63488 00:09:49.835 } 00:09:49.835 ] 00:09:49.835 }' 00:09:49.835 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.836 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.095 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:50.095 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.095 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.095 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.095 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.095 21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:50.095 
21:41:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:50.095 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.095 21:41:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.095 [2024-09-29 21:41:09.028241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.095 BaseBdev1 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.095 [ 00:09:50.095 { 00:09:50.095 "name": "BaseBdev1", 00:09:50.095 "aliases": [ 00:09:50.095 "94e5766f-31c7-4872-8ebc-475007ef1763" 00:09:50.095 ], 00:09:50.095 "product_name": "Malloc disk", 00:09:50.095 "block_size": 512, 00:09:50.095 "num_blocks": 65536, 00:09:50.095 "uuid": "94e5766f-31c7-4872-8ebc-475007ef1763", 00:09:50.095 "assigned_rate_limits": { 00:09:50.095 "rw_ios_per_sec": 0, 00:09:50.095 "rw_mbytes_per_sec": 0, 00:09:50.095 "r_mbytes_per_sec": 0, 00:09:50.095 "w_mbytes_per_sec": 0 00:09:50.095 }, 00:09:50.095 "claimed": true, 00:09:50.095 "claim_type": "exclusive_write", 00:09:50.095 "zoned": false, 00:09:50.095 "supported_io_types": { 00:09:50.095 "read": true, 00:09:50.095 "write": true, 00:09:50.095 "unmap": true, 00:09:50.095 "flush": true, 00:09:50.095 "reset": true, 00:09:50.095 "nvme_admin": false, 00:09:50.095 "nvme_io": false, 00:09:50.095 "nvme_io_md": false, 00:09:50.095 "write_zeroes": true, 00:09:50.095 "zcopy": true, 00:09:50.095 "get_zone_info": false, 00:09:50.095 "zone_management": false, 00:09:50.095 "zone_append": false, 00:09:50.095 "compare": false, 00:09:50.095 "compare_and_write": false, 00:09:50.095 "abort": true, 00:09:50.095 "seek_hole": false, 00:09:50.095 "seek_data": false, 00:09:50.095 "copy": true, 00:09:50.095 "nvme_iov_md": false 00:09:50.095 }, 00:09:50.095 "memory_domains": [ 00:09:50.095 { 00:09:50.095 "dma_device_id": "system", 00:09:50.095 "dma_device_type": 1 00:09:50.095 }, 00:09:50.095 { 00:09:50.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.095 "dma_device_type": 2 00:09:50.095 } 00:09:50.095 ], 00:09:50.095 "driver_specific": {} 00:09:50.095 } 00:09:50.095 ] 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:50.095 
21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.095 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.096 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.096 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.096 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.096 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.355 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.355 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.355 "name": "Existed_Raid", 00:09:50.355 "uuid": "1fae4a99-3e41-4a7a-ac23-72d7f93e3476", 00:09:50.355 "strip_size_kb": 0, 
00:09:50.355 "state": "configuring", 00:09:50.355 "raid_level": "raid1", 00:09:50.355 "superblock": true, 00:09:50.355 "num_base_bdevs": 3, 00:09:50.355 "num_base_bdevs_discovered": 2, 00:09:50.355 "num_base_bdevs_operational": 3, 00:09:50.355 "base_bdevs_list": [ 00:09:50.355 { 00:09:50.355 "name": "BaseBdev1", 00:09:50.355 "uuid": "94e5766f-31c7-4872-8ebc-475007ef1763", 00:09:50.355 "is_configured": true, 00:09:50.355 "data_offset": 2048, 00:09:50.355 "data_size": 63488 00:09:50.355 }, 00:09:50.355 { 00:09:50.355 "name": null, 00:09:50.355 "uuid": "fe5653a2-7e78-469d-b3c0-d296ee33024a", 00:09:50.355 "is_configured": false, 00:09:50.355 "data_offset": 0, 00:09:50.355 "data_size": 63488 00:09:50.355 }, 00:09:50.355 { 00:09:50.355 "name": "BaseBdev3", 00:09:50.355 "uuid": "5539d37e-2423-40e7-8c95-12cd9eaab895", 00:09:50.355 "is_configured": true, 00:09:50.355 "data_offset": 2048, 00:09:50.355 "data_size": 63488 00:09:50.355 } 00:09:50.355 ] 00:09:50.355 }' 00:09:50.355 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.355 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.613 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.613 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.613 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.613 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:50.613 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.613 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:50.613 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:50.613 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.613 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.613 [2024-09-29 21:41:09.563354] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:50.613 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.613 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:50.613 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.613 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.614 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.614 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.614 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.614 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.614 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.614 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.614 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.614 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.614 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.614 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:50.614 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.614 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.873 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.873 "name": "Existed_Raid", 00:09:50.873 "uuid": "1fae4a99-3e41-4a7a-ac23-72d7f93e3476", 00:09:50.873 "strip_size_kb": 0, 00:09:50.873 "state": "configuring", 00:09:50.873 "raid_level": "raid1", 00:09:50.873 "superblock": true, 00:09:50.873 "num_base_bdevs": 3, 00:09:50.873 "num_base_bdevs_discovered": 1, 00:09:50.873 "num_base_bdevs_operational": 3, 00:09:50.873 "base_bdevs_list": [ 00:09:50.873 { 00:09:50.873 "name": "BaseBdev1", 00:09:50.873 "uuid": "94e5766f-31c7-4872-8ebc-475007ef1763", 00:09:50.873 "is_configured": true, 00:09:50.873 "data_offset": 2048, 00:09:50.873 "data_size": 63488 00:09:50.873 }, 00:09:50.873 { 00:09:50.873 "name": null, 00:09:50.873 "uuid": "fe5653a2-7e78-469d-b3c0-d296ee33024a", 00:09:50.873 "is_configured": false, 00:09:50.873 "data_offset": 0, 00:09:50.873 "data_size": 63488 00:09:50.873 }, 00:09:50.873 { 00:09:50.873 "name": null, 00:09:50.873 "uuid": "5539d37e-2423-40e7-8c95-12cd9eaab895", 00:09:50.873 "is_configured": false, 00:09:50.873 "data_offset": 0, 00:09:50.873 "data_size": 63488 00:09:50.873 } 00:09:50.873 ] 00:09:50.873 }' 00:09:50.873 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.873 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.133 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.133 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:51.133 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:51.133 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.133 21:41:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.133 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:51.133 21:41:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.133 [2024-09-29 21:41:10.006605] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.133 "name": "Existed_Raid", 00:09:51.133 "uuid": "1fae4a99-3e41-4a7a-ac23-72d7f93e3476", 00:09:51.133 "strip_size_kb": 0, 00:09:51.133 "state": "configuring", 00:09:51.133 "raid_level": "raid1", 00:09:51.133 "superblock": true, 00:09:51.133 "num_base_bdevs": 3, 00:09:51.133 "num_base_bdevs_discovered": 2, 00:09:51.133 "num_base_bdevs_operational": 3, 00:09:51.133 "base_bdevs_list": [ 00:09:51.133 { 00:09:51.133 "name": "BaseBdev1", 00:09:51.133 "uuid": "94e5766f-31c7-4872-8ebc-475007ef1763", 00:09:51.133 "is_configured": true, 00:09:51.133 "data_offset": 2048, 00:09:51.133 "data_size": 63488 00:09:51.133 }, 00:09:51.133 { 00:09:51.133 "name": null, 00:09:51.133 "uuid": "fe5653a2-7e78-469d-b3c0-d296ee33024a", 00:09:51.133 "is_configured": false, 00:09:51.133 "data_offset": 0, 00:09:51.133 "data_size": 63488 00:09:51.133 }, 00:09:51.133 { 00:09:51.133 "name": "BaseBdev3", 00:09:51.133 "uuid": "5539d37e-2423-40e7-8c95-12cd9eaab895", 00:09:51.133 "is_configured": true, 00:09:51.133 "data_offset": 2048, 00:09:51.133 "data_size": 63488 00:09:51.133 } 00:09:51.133 ] 00:09:51.133 }' 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.133 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.703 [2024-09-29 21:41:10.493817] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.703 "name": "Existed_Raid", 00:09:51.703 "uuid": "1fae4a99-3e41-4a7a-ac23-72d7f93e3476", 00:09:51.703 "strip_size_kb": 0, 00:09:51.703 "state": "configuring", 00:09:51.703 "raid_level": "raid1", 00:09:51.703 "superblock": true, 00:09:51.703 "num_base_bdevs": 3, 00:09:51.703 "num_base_bdevs_discovered": 1, 00:09:51.703 "num_base_bdevs_operational": 3, 00:09:51.703 "base_bdevs_list": [ 00:09:51.703 { 00:09:51.703 "name": null, 00:09:51.703 "uuid": "94e5766f-31c7-4872-8ebc-475007ef1763", 00:09:51.703 "is_configured": false, 00:09:51.703 "data_offset": 0, 00:09:51.703 "data_size": 63488 00:09:51.703 }, 00:09:51.703 { 00:09:51.703 "name": null, 00:09:51.703 "uuid": 
"fe5653a2-7e78-469d-b3c0-d296ee33024a", 00:09:51.703 "is_configured": false, 00:09:51.703 "data_offset": 0, 00:09:51.703 "data_size": 63488 00:09:51.703 }, 00:09:51.703 { 00:09:51.703 "name": "BaseBdev3", 00:09:51.703 "uuid": "5539d37e-2423-40e7-8c95-12cd9eaab895", 00:09:51.703 "is_configured": true, 00:09:51.703 "data_offset": 2048, 00:09:51.703 "data_size": 63488 00:09:51.703 } 00:09:51.703 ] 00:09:51.703 }' 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.703 21:41:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.272 [2024-09-29 21:41:11.048417] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.272 "name": "Existed_Raid", 00:09:52.272 "uuid": "1fae4a99-3e41-4a7a-ac23-72d7f93e3476", 00:09:52.272 "strip_size_kb": 0, 00:09:52.272 "state": "configuring", 00:09:52.272 
"raid_level": "raid1", 00:09:52.272 "superblock": true, 00:09:52.272 "num_base_bdevs": 3, 00:09:52.272 "num_base_bdevs_discovered": 2, 00:09:52.272 "num_base_bdevs_operational": 3, 00:09:52.272 "base_bdevs_list": [ 00:09:52.272 { 00:09:52.272 "name": null, 00:09:52.272 "uuid": "94e5766f-31c7-4872-8ebc-475007ef1763", 00:09:52.272 "is_configured": false, 00:09:52.272 "data_offset": 0, 00:09:52.272 "data_size": 63488 00:09:52.272 }, 00:09:52.272 { 00:09:52.272 "name": "BaseBdev2", 00:09:52.272 "uuid": "fe5653a2-7e78-469d-b3c0-d296ee33024a", 00:09:52.272 "is_configured": true, 00:09:52.272 "data_offset": 2048, 00:09:52.272 "data_size": 63488 00:09:52.272 }, 00:09:52.272 { 00:09:52.272 "name": "BaseBdev3", 00:09:52.272 "uuid": "5539d37e-2423-40e7-8c95-12cd9eaab895", 00:09:52.272 "is_configured": true, 00:09:52.272 "data_offset": 2048, 00:09:52.272 "data_size": 63488 00:09:52.272 } 00:09:52.272 ] 00:09:52.272 }' 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.272 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.532 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.532 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.532 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.532 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:52.532 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:52.792 21:41:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 94e5766f-31c7-4872-8ebc-475007ef1763 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.792 [2024-09-29 21:41:11.593484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:52.792 [2024-09-29 21:41:11.593760] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:52.792 [2024-09-29 21:41:11.593775] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:52.792 [2024-09-29 21:41:11.594088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:52.792 [2024-09-29 21:41:11.594260] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:52.792 [2024-09-29 21:41:11.594274] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:52.792 NewBaseBdev 00:09:52.792 [2024-09-29 21:41:11.594426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:52.792 
21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.792 [ 00:09:52.792 { 00:09:52.792 "name": "NewBaseBdev", 00:09:52.792 "aliases": [ 00:09:52.792 "94e5766f-31c7-4872-8ebc-475007ef1763" 00:09:52.792 ], 00:09:52.792 "product_name": "Malloc disk", 00:09:52.792 "block_size": 512, 00:09:52.792 "num_blocks": 65536, 00:09:52.792 "uuid": "94e5766f-31c7-4872-8ebc-475007ef1763", 00:09:52.792 "assigned_rate_limits": { 00:09:52.792 "rw_ios_per_sec": 0, 00:09:52.792 "rw_mbytes_per_sec": 0, 00:09:52.792 "r_mbytes_per_sec": 0, 00:09:52.792 "w_mbytes_per_sec": 0 00:09:52.792 }, 00:09:52.792 "claimed": true, 00:09:52.792 "claim_type": "exclusive_write", 00:09:52.792 
"zoned": false, 00:09:52.792 "supported_io_types": { 00:09:52.792 "read": true, 00:09:52.792 "write": true, 00:09:52.792 "unmap": true, 00:09:52.792 "flush": true, 00:09:52.792 "reset": true, 00:09:52.792 "nvme_admin": false, 00:09:52.792 "nvme_io": false, 00:09:52.792 "nvme_io_md": false, 00:09:52.792 "write_zeroes": true, 00:09:52.792 "zcopy": true, 00:09:52.792 "get_zone_info": false, 00:09:52.792 "zone_management": false, 00:09:52.792 "zone_append": false, 00:09:52.792 "compare": false, 00:09:52.792 "compare_and_write": false, 00:09:52.792 "abort": true, 00:09:52.792 "seek_hole": false, 00:09:52.792 "seek_data": false, 00:09:52.792 "copy": true, 00:09:52.792 "nvme_iov_md": false 00:09:52.792 }, 00:09:52.792 "memory_domains": [ 00:09:52.792 { 00:09:52.792 "dma_device_id": "system", 00:09:52.792 "dma_device_type": 1 00:09:52.792 }, 00:09:52.792 { 00:09:52.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.792 "dma_device_type": 2 00:09:52.792 } 00:09:52.792 ], 00:09:52.792 "driver_specific": {} 00:09:52.792 } 00:09:52.792 ] 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.792 "name": "Existed_Raid", 00:09:52.792 "uuid": "1fae4a99-3e41-4a7a-ac23-72d7f93e3476", 00:09:52.792 "strip_size_kb": 0, 00:09:52.792 "state": "online", 00:09:52.792 "raid_level": "raid1", 00:09:52.792 "superblock": true, 00:09:52.792 "num_base_bdevs": 3, 00:09:52.792 "num_base_bdevs_discovered": 3, 00:09:52.792 "num_base_bdevs_operational": 3, 00:09:52.792 "base_bdevs_list": [ 00:09:52.792 { 00:09:52.792 "name": "NewBaseBdev", 00:09:52.792 "uuid": "94e5766f-31c7-4872-8ebc-475007ef1763", 00:09:52.792 "is_configured": true, 00:09:52.792 "data_offset": 2048, 00:09:52.792 "data_size": 63488 00:09:52.792 }, 00:09:52.792 { 00:09:52.792 "name": "BaseBdev2", 00:09:52.792 "uuid": "fe5653a2-7e78-469d-b3c0-d296ee33024a", 00:09:52.792 "is_configured": true, 00:09:52.792 "data_offset": 2048, 00:09:52.792 "data_size": 63488 00:09:52.792 }, 00:09:52.792 
{ 00:09:52.792 "name": "BaseBdev3", 00:09:52.792 "uuid": "5539d37e-2423-40e7-8c95-12cd9eaab895", 00:09:52.792 "is_configured": true, 00:09:52.792 "data_offset": 2048, 00:09:52.792 "data_size": 63488 00:09:52.792 } 00:09:52.792 ] 00:09:52.792 }' 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.792 21:41:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.363 [2024-09-29 21:41:12.080956] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.363 "name": "Existed_Raid", 00:09:53.363 
"aliases": [ 00:09:53.363 "1fae4a99-3e41-4a7a-ac23-72d7f93e3476" 00:09:53.363 ], 00:09:53.363 "product_name": "Raid Volume", 00:09:53.363 "block_size": 512, 00:09:53.363 "num_blocks": 63488, 00:09:53.363 "uuid": "1fae4a99-3e41-4a7a-ac23-72d7f93e3476", 00:09:53.363 "assigned_rate_limits": { 00:09:53.363 "rw_ios_per_sec": 0, 00:09:53.363 "rw_mbytes_per_sec": 0, 00:09:53.363 "r_mbytes_per_sec": 0, 00:09:53.363 "w_mbytes_per_sec": 0 00:09:53.363 }, 00:09:53.363 "claimed": false, 00:09:53.363 "zoned": false, 00:09:53.363 "supported_io_types": { 00:09:53.363 "read": true, 00:09:53.363 "write": true, 00:09:53.363 "unmap": false, 00:09:53.363 "flush": false, 00:09:53.363 "reset": true, 00:09:53.363 "nvme_admin": false, 00:09:53.363 "nvme_io": false, 00:09:53.363 "nvme_io_md": false, 00:09:53.363 "write_zeroes": true, 00:09:53.363 "zcopy": false, 00:09:53.363 "get_zone_info": false, 00:09:53.363 "zone_management": false, 00:09:53.363 "zone_append": false, 00:09:53.363 "compare": false, 00:09:53.363 "compare_and_write": false, 00:09:53.363 "abort": false, 00:09:53.363 "seek_hole": false, 00:09:53.363 "seek_data": false, 00:09:53.363 "copy": false, 00:09:53.363 "nvme_iov_md": false 00:09:53.363 }, 00:09:53.363 "memory_domains": [ 00:09:53.363 { 00:09:53.363 "dma_device_id": "system", 00:09:53.363 "dma_device_type": 1 00:09:53.363 }, 00:09:53.363 { 00:09:53.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.363 "dma_device_type": 2 00:09:53.363 }, 00:09:53.363 { 00:09:53.363 "dma_device_id": "system", 00:09:53.363 "dma_device_type": 1 00:09:53.363 }, 00:09:53.363 { 00:09:53.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.363 "dma_device_type": 2 00:09:53.363 }, 00:09:53.363 { 00:09:53.363 "dma_device_id": "system", 00:09:53.363 "dma_device_type": 1 00:09:53.363 }, 00:09:53.363 { 00:09:53.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.363 "dma_device_type": 2 00:09:53.363 } 00:09:53.363 ], 00:09:53.363 "driver_specific": { 00:09:53.363 "raid": { 00:09:53.363 
"uuid": "1fae4a99-3e41-4a7a-ac23-72d7f93e3476", 00:09:53.363 "strip_size_kb": 0, 00:09:53.363 "state": "online", 00:09:53.363 "raid_level": "raid1", 00:09:53.363 "superblock": true, 00:09:53.363 "num_base_bdevs": 3, 00:09:53.363 "num_base_bdevs_discovered": 3, 00:09:53.363 "num_base_bdevs_operational": 3, 00:09:53.363 "base_bdevs_list": [ 00:09:53.363 { 00:09:53.363 "name": "NewBaseBdev", 00:09:53.363 "uuid": "94e5766f-31c7-4872-8ebc-475007ef1763", 00:09:53.363 "is_configured": true, 00:09:53.363 "data_offset": 2048, 00:09:53.363 "data_size": 63488 00:09:53.363 }, 00:09:53.363 { 00:09:53.363 "name": "BaseBdev2", 00:09:53.363 "uuid": "fe5653a2-7e78-469d-b3c0-d296ee33024a", 00:09:53.363 "is_configured": true, 00:09:53.363 "data_offset": 2048, 00:09:53.363 "data_size": 63488 00:09:53.363 }, 00:09:53.363 { 00:09:53.363 "name": "BaseBdev3", 00:09:53.363 "uuid": "5539d37e-2423-40e7-8c95-12cd9eaab895", 00:09:53.363 "is_configured": true, 00:09:53.363 "data_offset": 2048, 00:09:53.363 "data_size": 63488 00:09:53.363 } 00:09:53.363 ] 00:09:53.363 } 00:09:53.363 } 00:09:53.363 }' 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:53.363 BaseBdev2 00:09:53.363 BaseBdev3' 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.363 
21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.363 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.623 [2024-09-29 21:41:12.348218] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.623 [2024-09-29 21:41:12.348260] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.623 [2024-09-29 21:41:12.348355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.623 [2024-09-29 21:41:12.348677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.623 [2024-09-29 21:41:12.348695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:53.623 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.623 21:41:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68096 00:09:53.623 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 68096 ']' 00:09:53.623 21:41:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 68096 00:09:53.623 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:53.623 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:53.623 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68096 00:09:53.623 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:53.623 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:53.623 killing process with pid 68096 00:09:53.623 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68096' 00:09:53.623 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 68096 00:09:53.623 [2024-09-29 21:41:12.399963] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.623 21:41:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 68096 00:09:53.883 [2024-09-29 21:41:12.714941] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.261 21:41:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:55.261 00:09:55.261 real 0m10.609s 00:09:55.261 user 0m16.395s 00:09:55.261 sys 0m2.065s 00:09:55.261 21:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.261 21:41:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.261 ************************************ 00:09:55.261 END TEST raid_state_function_test_sb 00:09:55.261 ************************************ 00:09:55.261 21:41:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:09:55.261 21:41:14 
bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:55.261 21:41:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.261 21:41:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.261 ************************************ 00:09:55.261 START TEST raid_superblock_test 00:09:55.261 ************************************ 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:55.261 21:41:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68716 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68716 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 68716 ']' 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:55.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:55.261 21:41:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.261 [2024-09-29 21:41:14.228597] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:55.261 [2024-09-29 21:41:14.228721] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68716 ] 00:09:55.521 [2024-09-29 21:41:14.398355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.780 [2024-09-29 21:41:14.651824] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.039 [2024-09-29 21:41:14.882632] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.039 [2024-09-29 21:41:14.882676] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:56.299 
21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.299 malloc1 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.299 [2024-09-29 21:41:15.111876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:56.299 [2024-09-29 21:41:15.111950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.299 [2024-09-29 21:41:15.111975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:56.299 [2024-09-29 21:41:15.111988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.299 [2024-09-29 21:41:15.114420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.299 [2024-09-29 21:41:15.114458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:56.299 pt1 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.299 malloc2 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.299 [2024-09-29 21:41:15.181417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:56.299 [2024-09-29 21:41:15.181483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.299 [2024-09-29 21:41:15.181507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:56.299 [2024-09-29 21:41:15.181517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.299 [2024-09-29 21:41:15.183810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.299 [2024-09-29 21:41:15.183845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:56.299 
pt2 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.299 malloc3 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.299 [2024-09-29 21:41:15.242708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:56.299 [2024-09-29 21:41:15.242759] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.299 [2024-09-29 21:41:15.242781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:56.299 [2024-09-29 21:41:15.242790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.299 [2024-09-29 21:41:15.245147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.299 [2024-09-29 21:41:15.245181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:56.299 pt3 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.299 [2024-09-29 21:41:15.254754] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:56.299 [2024-09-29 21:41:15.256811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:56.299 [2024-09-29 21:41:15.256884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:56.299 [2024-09-29 21:41:15.257058] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:56.299 [2024-09-29 21:41:15.257078] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:56.299 [2024-09-29 21:41:15.257304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:56.299 
[2024-09-29 21:41:15.257484] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:56.299 [2024-09-29 21:41:15.257500] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:56.299 [2024-09-29 21:41:15.257652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.299 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.300 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.300 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.300 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.300 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.300 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.300 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.300 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.300 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.300 21:41:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.569 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.569 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.569 "name": "raid_bdev1", 00:09:56.569 "uuid": "64876a51-8c92-4118-9dce-525be22460bb", 00:09:56.569 "strip_size_kb": 0, 00:09:56.569 "state": "online", 00:09:56.569 "raid_level": "raid1", 00:09:56.569 "superblock": true, 00:09:56.569 "num_base_bdevs": 3, 00:09:56.569 "num_base_bdevs_discovered": 3, 00:09:56.569 "num_base_bdevs_operational": 3, 00:09:56.569 "base_bdevs_list": [ 00:09:56.569 { 00:09:56.569 "name": "pt1", 00:09:56.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.569 "is_configured": true, 00:09:56.569 "data_offset": 2048, 00:09:56.569 "data_size": 63488 00:09:56.569 }, 00:09:56.569 { 00:09:56.569 "name": "pt2", 00:09:56.569 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.569 "is_configured": true, 00:09:56.569 "data_offset": 2048, 00:09:56.569 "data_size": 63488 00:09:56.569 }, 00:09:56.569 { 00:09:56.569 "name": "pt3", 00:09:56.569 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.569 "is_configured": true, 00:09:56.569 "data_offset": 2048, 00:09:56.569 "data_size": 63488 00:09:56.569 } 00:09:56.569 ] 00:09:56.569 }' 00:09:56.569 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.569 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.851 21:41:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.851 [2024-09-29 21:41:15.722255] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.851 "name": "raid_bdev1", 00:09:56.851 "aliases": [ 00:09:56.851 "64876a51-8c92-4118-9dce-525be22460bb" 00:09:56.851 ], 00:09:56.851 "product_name": "Raid Volume", 00:09:56.851 "block_size": 512, 00:09:56.851 "num_blocks": 63488, 00:09:56.851 "uuid": "64876a51-8c92-4118-9dce-525be22460bb", 00:09:56.851 "assigned_rate_limits": { 00:09:56.851 "rw_ios_per_sec": 0, 00:09:56.851 "rw_mbytes_per_sec": 0, 00:09:56.851 "r_mbytes_per_sec": 0, 00:09:56.851 "w_mbytes_per_sec": 0 00:09:56.851 }, 00:09:56.851 "claimed": false, 00:09:56.851 "zoned": false, 00:09:56.851 "supported_io_types": { 00:09:56.851 "read": true, 00:09:56.851 "write": true, 00:09:56.851 "unmap": false, 00:09:56.851 "flush": false, 00:09:56.851 "reset": true, 00:09:56.851 "nvme_admin": false, 00:09:56.851 "nvme_io": false, 00:09:56.851 "nvme_io_md": false, 00:09:56.851 "write_zeroes": true, 00:09:56.851 "zcopy": false, 00:09:56.851 "get_zone_info": false, 00:09:56.851 "zone_management": false, 00:09:56.851 "zone_append": false, 00:09:56.851 "compare": false, 00:09:56.851 
"compare_and_write": false, 00:09:56.851 "abort": false, 00:09:56.851 "seek_hole": false, 00:09:56.851 "seek_data": false, 00:09:56.851 "copy": false, 00:09:56.851 "nvme_iov_md": false 00:09:56.851 }, 00:09:56.851 "memory_domains": [ 00:09:56.851 { 00:09:56.851 "dma_device_id": "system", 00:09:56.851 "dma_device_type": 1 00:09:56.851 }, 00:09:56.851 { 00:09:56.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.851 "dma_device_type": 2 00:09:56.851 }, 00:09:56.851 { 00:09:56.851 "dma_device_id": "system", 00:09:56.851 "dma_device_type": 1 00:09:56.851 }, 00:09:56.851 { 00:09:56.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.851 "dma_device_type": 2 00:09:56.851 }, 00:09:56.851 { 00:09:56.851 "dma_device_id": "system", 00:09:56.851 "dma_device_type": 1 00:09:56.851 }, 00:09:56.851 { 00:09:56.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.851 "dma_device_type": 2 00:09:56.851 } 00:09:56.851 ], 00:09:56.851 "driver_specific": { 00:09:56.851 "raid": { 00:09:56.851 "uuid": "64876a51-8c92-4118-9dce-525be22460bb", 00:09:56.851 "strip_size_kb": 0, 00:09:56.851 "state": "online", 00:09:56.851 "raid_level": "raid1", 00:09:56.851 "superblock": true, 00:09:56.851 "num_base_bdevs": 3, 00:09:56.851 "num_base_bdevs_discovered": 3, 00:09:56.851 "num_base_bdevs_operational": 3, 00:09:56.851 "base_bdevs_list": [ 00:09:56.851 { 00:09:56.851 "name": "pt1", 00:09:56.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:56.851 "is_configured": true, 00:09:56.851 "data_offset": 2048, 00:09:56.851 "data_size": 63488 00:09:56.851 }, 00:09:56.851 { 00:09:56.851 "name": "pt2", 00:09:56.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:56.851 "is_configured": true, 00:09:56.851 "data_offset": 2048, 00:09:56.851 "data_size": 63488 00:09:56.851 }, 00:09:56.851 { 00:09:56.851 "name": "pt3", 00:09:56.851 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:56.851 "is_configured": true, 00:09:56.851 "data_offset": 2048, 00:09:56.851 "data_size": 63488 00:09:56.851 } 
00:09:56.851 ] 00:09:56.851 } 00:09:56.851 } 00:09:56.851 }' 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:56.851 pt2 00:09:56.851 pt3' 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.851 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.138 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.138 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.138 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.138 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:57.138 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.139 [2024-09-29 21:41:15.961743] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.139 21:41:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=64876a51-8c92-4118-9dce-525be22460bb
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 64876a51-8c92-4118-9dce-525be22460bb ']'
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.139 [2024-09-29 21:41:16.009414] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:57.139 [2024-09-29 21:41:16.009485] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:57.139 [2024-09-29 21:41:16.009595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:57.139 [2024-09-29 21:41:16.009718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:57.139 [2024-09-29 21:41:16.009767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.139 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.402 [2024-09-29 21:41:16.157207] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:57.402 [2024-09-29 21:41:16.159456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:57.402 [2024-09-29 21:41:16.159557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:09:57.402 [2024-09-29 21:41:16.159644] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:57.402 [2024-09-29 21:41:16.159737] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:57.402 [2024-09-29 21:41:16.159802] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:09:57.402 [2024-09-29 21:41:16.159857] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:57.402 [2024-09-29 21:41:16.159885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:09:57.402 request:
00:09:57.402 {
00:09:57.402 "name": "raid_bdev1",
00:09:57.402 "raid_level": "raid1",
00:09:57.402 "base_bdevs": [
00:09:57.402 "malloc1",
00:09:57.402 "malloc2",
00:09:57.402 "malloc3"
00:09:57.402 ],
00:09:57.402 "superblock": false,
00:09:57.402 "method": "bdev_raid_create",
00:09:57.402 "req_id": 1
00:09:57.402 }
00:09:57.402 Got JSON-RPC error response
00:09:57.402 response:
00:09:57.402 {
00:09:57.402 "code": -17,
00:09:57.402 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:57.402 }
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:57.402 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.403 [2024-09-29 21:41:16.217085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:57.403 [2024-09-29 21:41:16.217140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:57.403 [2024-09-29 21:41:16.217166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:09:57.403 [2024-09-29 21:41:16.217175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:57.403 [2024-09-29 21:41:16.219648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:57.403 [2024-09-29 21:41:16.219684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:57.403 [2024-09-29 21:41:16.219757] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:57.403 [2024-09-29 21:41:16.219810] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:57.403 pt1
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:57.403 "name": "raid_bdev1",
00:09:57.403 "uuid": "64876a51-8c92-4118-9dce-525be22460bb",
00:09:57.403 "strip_size_kb": 0,
00:09:57.403 "state": "configuring",
00:09:57.403 "raid_level": "raid1",
00:09:57.403 "superblock": true,
00:09:57.403 "num_base_bdevs": 3,
00:09:57.403 "num_base_bdevs_discovered": 1,
00:09:57.403 "num_base_bdevs_operational": 3,
00:09:57.403 "base_bdevs_list": [
00:09:57.403 {
00:09:57.403 "name": "pt1",
00:09:57.403 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:57.403 "is_configured": true,
00:09:57.403 "data_offset": 2048,
00:09:57.403 "data_size": 63488
00:09:57.403 },
00:09:57.403 {
00:09:57.403 "name": null,
00:09:57.403 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:57.403 "is_configured": false,
00:09:57.403 "data_offset": 2048,
00:09:57.403 "data_size": 63488
00:09:57.403 },
00:09:57.403 {
00:09:57.403 "name": null,
00:09:57.403 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:57.403 "is_configured": false,
00:09:57.403 "data_offset": 2048,
00:09:57.403 "data_size": 63488
00:09:57.403 }
00:09:57.403 ]
00:09:57.403 }'
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:57.403 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.971 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:09:57.971 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:57.971 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.971 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.971 [2024-09-29 21:41:16.664310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:57.971 [2024-09-29 21:41:16.664419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:57.971 [2024-09-29 21:41:16.664465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:09:57.971 [2024-09-29 21:41:16.664494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:57.971 [2024-09-29 21:41:16.664961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:57.971 [2024-09-29 21:41:16.665019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:57.971 [2024-09-29 21:41:16.665140] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:57.971 [2024-09-29 21:41:16.665192] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:57.971 pt2
00:09:57.971 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.971 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:09:57.971 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.971 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.971 [2024-09-29 21:41:16.676318] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:57.972 "name": "raid_bdev1",
00:09:57.972 "uuid": "64876a51-8c92-4118-9dce-525be22460bb",
00:09:57.972 "strip_size_kb": 0,
00:09:57.972 "state": "configuring",
00:09:57.972 "raid_level": "raid1",
00:09:57.972 "superblock": true,
00:09:57.972 "num_base_bdevs": 3,
00:09:57.972 "num_base_bdevs_discovered": 1,
00:09:57.972 "num_base_bdevs_operational": 3,
00:09:57.972 "base_bdevs_list": [
00:09:57.972 {
00:09:57.972 "name": "pt1",
00:09:57.972 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:57.972 "is_configured": true,
00:09:57.972 "data_offset": 2048,
00:09:57.972 "data_size": 63488
00:09:57.972 },
00:09:57.972 {
00:09:57.972 "name": null,
00:09:57.972 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:57.972 "is_configured": false,
00:09:57.972 "data_offset": 0,
00:09:57.972 "data_size": 63488
00:09:57.972 },
00:09:57.972 {
00:09:57.972 "name": null,
00:09:57.972 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:57.972 "is_configured": false,
00:09:57.972 "data_offset": 2048,
00:09:57.972 "data_size": 63488
00:09:57.972 }
00:09:57.972 ]
00:09:57.972 }'
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:57.972 21:41:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.232 [2024-09-29 21:41:17.059665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:58.232 [2024-09-29 21:41:17.059736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:58.232 [2024-09-29 21:41:17.059755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:09:58.232 [2024-09-29 21:41:17.059767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:58.232 [2024-09-29 21:41:17.060298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:58.232 [2024-09-29 21:41:17.060322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:58.232 [2024-09-29 21:41:17.060418] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:58.232 [2024-09-29 21:41:17.060456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:58.232 pt2
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.232 [2024-09-29 21:41:17.067660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:58.232 [2024-09-29 21:41:17.067754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:58.232 [2024-09-29 21:41:17.067777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:09:58.232 [2024-09-29 21:41:17.067791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:58.232 [2024-09-29 21:41:17.068181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:58.232 [2024-09-29 21:41:17.068229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:58.232 [2024-09-29 21:41:17.068295] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:09:58.232 [2024-09-29 21:41:17.068318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:58.232 [2024-09-29 21:41:17.068438] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:58.232 [2024-09-29 21:41:17.068451] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:58.232 [2024-09-29 21:41:17.068722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:09:58.232 [2024-09-29 21:41:17.068896] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:58.232 [2024-09-29 21:41:17.068914] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:09:58.232 [2024-09-29 21:41:17.069088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:58.232 pt3
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:58.232 "name": "raid_bdev1",
00:09:58.232 "uuid": "64876a51-8c92-4118-9dce-525be22460bb",
00:09:58.232 "strip_size_kb": 0,
00:09:58.232 "state": "online",
00:09:58.232 "raid_level": "raid1",
00:09:58.232 "superblock": true,
00:09:58.232 "num_base_bdevs": 3,
00:09:58.232 "num_base_bdevs_discovered": 3,
00:09:58.232 "num_base_bdevs_operational": 3,
00:09:58.232 "base_bdevs_list": [
00:09:58.232 {
00:09:58.232 "name": "pt1",
00:09:58.232 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:58.232 "is_configured": true,
00:09:58.232 "data_offset": 2048,
00:09:58.232 "data_size": 63488
00:09:58.232 },
00:09:58.232 {
00:09:58.232 "name": "pt2",
00:09:58.232 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:58.232 "is_configured": true,
00:09:58.232 "data_offset": 2048,
00:09:58.232 "data_size": 63488
00:09:58.232 },
00:09:58.232 {
00:09:58.232 "name": "pt3",
00:09:58.232 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:58.232 "is_configured": true,
00:09:58.232 "data_offset": 2048,
00:09:58.232 "data_size": 63488
00:09:58.232 }
00:09:58.232 ]
00:09:58.232 }'
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:58.232 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.801 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:09:58.801 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:58.801 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:58.801 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:58.801 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:58.801 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:58.801 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:58.801 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.801 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.801 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:58.801 [2024-09-29 21:41:17.551199] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:58.801 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.801 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:58.801 "name": "raid_bdev1",
00:09:58.801 "aliases": [
00:09:58.801 "64876a51-8c92-4118-9dce-525be22460bb"
00:09:58.801 ],
00:09:58.801 "product_name": "Raid Volume",
00:09:58.801 "block_size": 512,
00:09:58.801 "num_blocks": 63488,
00:09:58.801 "uuid": "64876a51-8c92-4118-9dce-525be22460bb",
00:09:58.801 "assigned_rate_limits": {
00:09:58.801 "rw_ios_per_sec": 0,
00:09:58.801 "rw_mbytes_per_sec": 0,
00:09:58.801 "r_mbytes_per_sec": 0,
00:09:58.801 "w_mbytes_per_sec": 0
00:09:58.801 },
00:09:58.801 "claimed": false,
00:09:58.801 "zoned": false,
00:09:58.801 "supported_io_types": {
00:09:58.801 "read": true,
00:09:58.801 "write": true,
00:09:58.801 "unmap": false,
00:09:58.801 "flush": false,
00:09:58.801 "reset": true,
00:09:58.801 "nvme_admin": false,
00:09:58.801 "nvme_io": false,
00:09:58.801 "nvme_io_md": false,
00:09:58.801 "write_zeroes": true,
00:09:58.801 "zcopy": false,
00:09:58.801 "get_zone_info": false,
00:09:58.801 "zone_management": false,
00:09:58.801 "zone_append": false,
00:09:58.801 "compare": false,
00:09:58.801 "compare_and_write": false,
00:09:58.801 "abort": false,
00:09:58.801 "seek_hole": false,
00:09:58.801 "seek_data": false,
00:09:58.801 "copy": false,
00:09:58.801 "nvme_iov_md": false
00:09:58.801 },
00:09:58.801 "memory_domains": [
00:09:58.801 {
00:09:58.801 "dma_device_id": "system",
00:09:58.801 "dma_device_type": 1
00:09:58.801 },
00:09:58.801 {
00:09:58.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:58.801 "dma_device_type": 2
00:09:58.801 },
00:09:58.801 {
00:09:58.801 "dma_device_id": "system",
00:09:58.801 "dma_device_type": 1
00:09:58.801 },
00:09:58.801 {
00:09:58.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:58.801 "dma_device_type": 2
00:09:58.801 },
00:09:58.801 {
00:09:58.801 "dma_device_id": "system",
00:09:58.801 "dma_device_type": 1
00:09:58.801 },
00:09:58.802 {
00:09:58.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:58.802 "dma_device_type": 2
00:09:58.802 }
00:09:58.802 ],
00:09:58.802 "driver_specific": {
00:09:58.802 "raid": {
00:09:58.802 "uuid": "64876a51-8c92-4118-9dce-525be22460bb",
00:09:58.802 "strip_size_kb": 0,
00:09:58.802 "state": "online",
00:09:58.802 "raid_level": "raid1",
00:09:58.802 "superblock": true,
00:09:58.802 "num_base_bdevs": 3,
00:09:58.802 "num_base_bdevs_discovered": 3,
00:09:58.802 "num_base_bdevs_operational": 3,
00:09:58.802 "base_bdevs_list": [
00:09:58.802 {
00:09:58.802 "name": "pt1",
00:09:58.802 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:58.802 "is_configured": true,
00:09:58.802 "data_offset": 2048,
00:09:58.802 "data_size": 63488
00:09:58.802 },
00:09:58.802 {
00:09:58.802 "name": "pt2",
00:09:58.802 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:58.802 "is_configured": true,
00:09:58.802 "data_offset": 2048,
00:09:58.802 "data_size": 63488
00:09:58.802 },
00:09:58.802 {
00:09:58.802 "name": "pt3",
00:09:58.802 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:58.802 "is_configured": true,
00:09:58.802 "data_offset": 2048,
00:09:58.802 "data_size": 63488
00:09:58.802 }
00:09:58.802 ]
00:09:58.802 }
00:09:58.802 }
00:09:58.802 }'
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:58.802 pt2
00:09:58.802 pt3'
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:58.802 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.062 [2024-09-29 21:41:17.814594] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 64876a51-8c92-4118-9dce-525be22460bb '!=' 64876a51-8c92-4118-9dce-525be22460bb ']'
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.062 [2024-09-29 21:41:17.866324] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:59.062 "name": "raid_bdev1",
00:09:59.062 "uuid": "64876a51-8c92-4118-9dce-525be22460bb",
00:09:59.062 "strip_size_kb": 0,
00:09:59.062 "state": "online",
00:09:59.062 "raid_level": "raid1",
00:09:59.062 "superblock": true,
00:09:59.062 "num_base_bdevs": 3,
00:09:59.062 "num_base_bdevs_discovered": 2,
00:09:59.062 "num_base_bdevs_operational": 2,
00:09:59.062 "base_bdevs_list": [
00:09:59.062 {
00:09:59.062 "name": null,
00:09:59.062 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.062 "is_configured": false,
00:09:59.062 "data_offset": 0,
00:09:59.062 "data_size": 63488
00:09:59.062 },
00:09:59.062 {
00:09:59.062 "name": "pt2",
00:09:59.062 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:59.062 "is_configured": true,
00:09:59.062 "data_offset": 2048,
00:09:59.062 "data_size": 63488
00:09:59.062 },
00:09:59.062 {
00:09:59.062 "name": "pt3",
00:09:59.062 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:59.062 "is_configured": true,
00:09:59.062 "data_offset": 2048,
00:09:59.062 "data_size": 63488
00:09:59.062 }
00:09:59.062 ]
00:09:59.062 }'
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:59.062 21:41:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.631 [2024-09-29 21:41:18.341459] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:59.631 [2024-09-29 21:41:18.341542] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:59.631 [2024-09-29 21:41:18.341645] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:59.631 [2024-09-29 21:41:18.341726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:59.631 [2024-09-29 21:41:18.341775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.631 21:41:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.631 [2024-09-29 21:41:18.417328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:59.631 [2024-09-29 21:41:18.417446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.631 [2024-09-29 21:41:18.417480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:59.631 [2024-09-29 21:41:18.417510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.631 [2024-09-29 21:41:18.420110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.631 [2024-09-29 21:41:18.420208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:59.631 [2024-09-29 21:41:18.420335] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:59.631 [2024-09-29 21:41:18.420410] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:59.631 pt2 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.632 "name": "raid_bdev1", 00:09:59.632 "uuid": "64876a51-8c92-4118-9dce-525be22460bb", 00:09:59.632 "strip_size_kb": 0, 00:09:59.632 "state": "configuring", 00:09:59.632 "raid_level": "raid1", 00:09:59.632 "superblock": true, 00:09:59.632 "num_base_bdevs": 3, 00:09:59.632 "num_base_bdevs_discovered": 1, 00:09:59.632 "num_base_bdevs_operational": 2, 00:09:59.632 "base_bdevs_list": [ 00:09:59.632 { 00:09:59.632 "name": null, 00:09:59.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.632 "is_configured": false, 00:09:59.632 "data_offset": 2048, 00:09:59.632 "data_size": 63488 00:09:59.632 }, 00:09:59.632 { 00:09:59.632 "name": "pt2", 00:09:59.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:59.632 "is_configured": true, 00:09:59.632 "data_offset": 2048, 00:09:59.632 "data_size": 63488 00:09:59.632 }, 00:09:59.632 { 00:09:59.632 "name": null, 00:09:59.632 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:59.632 "is_configured": false, 00:09:59.632 "data_offset": 2048, 00:09:59.632 "data_size": 63488 00:09:59.632 } 00:09:59.632 ] 00:09:59.632 }' 
00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.632 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.891 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:59.891 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:59.891 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:59.891 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:59.891 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.891 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.891 [2024-09-29 21:41:18.860589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:59.891 [2024-09-29 21:41:18.860745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.891 [2024-09-29 21:41:18.860772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:59.891 [2024-09-29 21:41:18.860786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.891 [2024-09-29 21:41:18.861344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.891 [2024-09-29 21:41:18.861369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:59.891 [2024-09-29 21:41:18.861467] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:59.891 [2024-09-29 21:41:18.861503] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:59.891 [2024-09-29 21:41:18.861640] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:59.891 [2024-09-29 21:41:18.861659] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:59.891 [2024-09-29 21:41:18.861926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:59.891 [2024-09-29 21:41:18.862115] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:59.892 [2024-09-29 21:41:18.862125] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:59.892 [2024-09-29 21:41:18.862314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.892 pt3 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.892 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.152 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.152 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.152 "name": "raid_bdev1", 00:10:00.152 "uuid": "64876a51-8c92-4118-9dce-525be22460bb", 00:10:00.152 "strip_size_kb": 0, 00:10:00.152 "state": "online", 00:10:00.152 "raid_level": "raid1", 00:10:00.152 "superblock": true, 00:10:00.152 "num_base_bdevs": 3, 00:10:00.152 "num_base_bdevs_discovered": 2, 00:10:00.152 "num_base_bdevs_operational": 2, 00:10:00.152 "base_bdevs_list": [ 00:10:00.152 { 00:10:00.152 "name": null, 00:10:00.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.152 "is_configured": false, 00:10:00.152 "data_offset": 2048, 00:10:00.152 "data_size": 63488 00:10:00.152 }, 00:10:00.152 { 00:10:00.152 "name": "pt2", 00:10:00.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.152 "is_configured": true, 00:10:00.152 "data_offset": 2048, 00:10:00.152 "data_size": 63488 00:10:00.152 }, 00:10:00.152 { 00:10:00.152 "name": "pt3", 00:10:00.152 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.152 "is_configured": true, 00:10:00.152 "data_offset": 2048, 00:10:00.152 "data_size": 63488 00:10:00.152 } 00:10:00.152 ] 00:10:00.152 }' 00:10:00.152 21:41:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.152 21:41:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.411 
21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.411 [2024-09-29 21:41:19.299791] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.411 [2024-09-29 21:41:19.299897] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:00.411 [2024-09-29 21:41:19.300030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:00.411 [2024-09-29 21:41:19.300162] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:00.411 [2024-09-29 21:41:19.300219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.411 21:41:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.411 [2024-09-29 21:41:19.363672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:00.411 [2024-09-29 21:41:19.363775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.411 [2024-09-29 21:41:19.363816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:00.411 [2024-09-29 21:41:19.363846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.411 [2024-09-29 21:41:19.366429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.411 [2024-09-29 21:41:19.366503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:00.411 [2024-09-29 21:41:19.366619] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:00.411 [2024-09-29 21:41:19.366693] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:00.411 [2024-09-29 21:41:19.366860] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:00.411 [2024-09-29 21:41:19.366915] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:00.411 [2024-09-29 21:41:19.366956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:00.411 [2024-09-29 
21:41:19.367070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:00.411 pt1 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.411 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.412 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.412 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.412 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.412 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.671 21:41:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.671 "name": "raid_bdev1", 00:10:00.671 "uuid": "64876a51-8c92-4118-9dce-525be22460bb", 00:10:00.671 "strip_size_kb": 0, 00:10:00.671 "state": "configuring", 00:10:00.671 "raid_level": "raid1", 00:10:00.671 "superblock": true, 00:10:00.671 "num_base_bdevs": 3, 00:10:00.671 "num_base_bdevs_discovered": 1, 00:10:00.671 "num_base_bdevs_operational": 2, 00:10:00.671 "base_bdevs_list": [ 00:10:00.671 { 00:10:00.671 "name": null, 00:10:00.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.671 "is_configured": false, 00:10:00.671 "data_offset": 2048, 00:10:00.671 "data_size": 63488 00:10:00.671 }, 00:10:00.671 { 00:10:00.671 "name": "pt2", 00:10:00.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.671 "is_configured": true, 00:10:00.671 "data_offset": 2048, 00:10:00.671 "data_size": 63488 00:10:00.671 }, 00:10:00.671 { 00:10:00.671 "name": null, 00:10:00.671 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.671 "is_configured": false, 00:10:00.671 "data_offset": 2048, 00:10:00.671 "data_size": 63488 00:10:00.671 } 00:10:00.671 ] 00:10:00.671 }' 00:10:00.671 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.671 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.931 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:00.931 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.931 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:00.931 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.931 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.931 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:10:00.931 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:00.931 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.931 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.931 [2024-09-29 21:41:19.846864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:00.931 [2024-09-29 21:41:19.847012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.931 [2024-09-29 21:41:19.847052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:00.931 [2024-09-29 21:41:19.847064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.931 [2024-09-29 21:41:19.847583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.931 [2024-09-29 21:41:19.847601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:00.931 [2024-09-29 21:41:19.847684] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:00.932 [2024-09-29 21:41:19.847734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:00.932 [2024-09-29 21:41:19.847867] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:00.932 [2024-09-29 21:41:19.847876] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:00.932 [2024-09-29 21:41:19.848195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:00.932 [2024-09-29 21:41:19.848373] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:00.932 [2024-09-29 21:41:19.848388] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:10:00.932 [2024-09-29 21:41:19.848532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.932 pt3 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.932 "name": "raid_bdev1", 00:10:00.932 "uuid": "64876a51-8c92-4118-9dce-525be22460bb", 00:10:00.932 "strip_size_kb": 0, 00:10:00.932 "state": "online", 00:10:00.932 "raid_level": "raid1", 00:10:00.932 "superblock": true, 00:10:00.932 "num_base_bdevs": 3, 00:10:00.932 "num_base_bdevs_discovered": 2, 00:10:00.932 "num_base_bdevs_operational": 2, 00:10:00.932 "base_bdevs_list": [ 00:10:00.932 { 00:10:00.932 "name": null, 00:10:00.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.932 "is_configured": false, 00:10:00.932 "data_offset": 2048, 00:10:00.932 "data_size": 63488 00:10:00.932 }, 00:10:00.932 { 00:10:00.932 "name": "pt2", 00:10:00.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:00.932 "is_configured": true, 00:10:00.932 "data_offset": 2048, 00:10:00.932 "data_size": 63488 00:10:00.932 }, 00:10:00.932 { 00:10:00.932 "name": "pt3", 00:10:00.932 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:00.932 "is_configured": true, 00:10:00.932 "data_offset": 2048, 00:10:00.932 "data_size": 63488 00:10:00.932 } 00:10:00.932 ] 00:10:00.932 }' 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.932 21:41:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:01.501 
21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.501 [2024-09-29 21:41:20.338307] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 64876a51-8c92-4118-9dce-525be22460bb '!=' 64876a51-8c92-4118-9dce-525be22460bb ']' 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68716 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 68716 ']' 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 68716 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68716 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68716' 00:10:01.501 killing process with pid 68716 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 68716 00:10:01.501 [2024-09-29 
21:41:20.411228] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.501 [2024-09-29 21:41:20.411409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.501 21:41:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 68716 00:10:01.501 [2024-09-29 21:41:20.411510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.501 [2024-09-29 21:41:20.411526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:01.760 [2024-09-29 21:41:20.725483] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.140 21:41:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:03.140 00:10:03.140 real 0m7.914s 00:10:03.140 user 0m12.076s 00:10:03.140 sys 0m1.536s 00:10:03.140 21:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.140 21:41:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.140 ************************************ 00:10:03.140 END TEST raid_superblock_test 00:10:03.140 ************************************ 00:10:03.140 21:41:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:03.140 21:41:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:03.140 21:41:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.140 21:41:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.140 ************************************ 00:10:03.140 START TEST raid_read_error_test 00:10:03.140 ************************************ 00:10:03.140 21:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:10:03.140 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:10:03.140 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:03.140 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:03.400 21:41:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Ghe7db5Que 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69167 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69167 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 69167 ']' 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.400 21:41:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.400 [2024-09-29 21:41:22.231050] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:03.400 [2024-09-29 21:41:22.231237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69167 ] 00:10:03.659 [2024-09-29 21:41:22.395226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.918 [2024-09-29 21:41:22.643504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.919 [2024-09-29 21:41:22.873157] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.919 [2024-09-29 21:41:22.873294] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.178 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.178 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:04.178 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.178 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:04.178 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.179 BaseBdev1_malloc 00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.179 true 00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.179 [2024-09-29 21:41:23.128521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:04.179 [2024-09-29 21:41:23.128592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.179 [2024-09-29 21:41:23.128610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:04.179 [2024-09-29 21:41:23.128623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.179 [2024-09-29 21:41:23.131082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.179 [2024-09-29 21:41:23.131119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:04.179 BaseBdev1 00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.179 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.438 BaseBdev2_malloc 00:10:04.438 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.438 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:04.438 21:41:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.438 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.438 true 00:10:04.438 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.438 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:04.438 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.439 [2024-09-29 21:41:23.210664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:04.439 [2024-09-29 21:41:23.210722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.439 [2024-09-29 21:41:23.210738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:04.439 [2024-09-29 21:41:23.210750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.439 [2024-09-29 21:41:23.213110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.439 [2024-09-29 21:41:23.213151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:04.439 BaseBdev2 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.439 BaseBdev3_malloc 00:10:04.439 21:41:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.439 true 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.439 [2024-09-29 21:41:23.275209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:04.439 [2024-09-29 21:41:23.275330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.439 [2024-09-29 21:41:23.275351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:04.439 [2024-09-29 21:41:23.275363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.439 [2024-09-29 21:41:23.277727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.439 [2024-09-29 21:41:23.277766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:04.439 BaseBdev3 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.439 [2024-09-29 21:41:23.287265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.439 [2024-09-29 21:41:23.289327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.439 [2024-09-29 21:41:23.289402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.439 [2024-09-29 21:41:23.289609] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:04.439 [2024-09-29 21:41:23.289622] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:04.439 [2024-09-29 21:41:23.289867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:04.439 [2024-09-29 21:41:23.290027] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:04.439 [2024-09-29 21:41:23.290040] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:04.439 [2024-09-29 21:41:23.290192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.439 21:41:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.439 "name": "raid_bdev1", 00:10:04.439 "uuid": "8ce4af93-c966-4550-9083-07366ccaa8a2", 00:10:04.439 "strip_size_kb": 0, 00:10:04.439 "state": "online", 00:10:04.439 "raid_level": "raid1", 00:10:04.439 "superblock": true, 00:10:04.439 "num_base_bdevs": 3, 00:10:04.439 "num_base_bdevs_discovered": 3, 00:10:04.439 "num_base_bdevs_operational": 3, 00:10:04.439 "base_bdevs_list": [ 00:10:04.439 { 00:10:04.439 "name": "BaseBdev1", 00:10:04.439 "uuid": "154a2063-5f48-5929-90c1-618a9e55c23d", 00:10:04.439 "is_configured": true, 00:10:04.439 "data_offset": 2048, 00:10:04.439 "data_size": 63488 00:10:04.439 }, 00:10:04.439 { 00:10:04.439 "name": "BaseBdev2", 00:10:04.439 "uuid": "c93584dd-66b2-5737-ae20-aec4b766c265", 00:10:04.439 "is_configured": true, 00:10:04.439 "data_offset": 2048, 00:10:04.439 "data_size": 63488 
00:10:04.439 }, 00:10:04.439 { 00:10:04.439 "name": "BaseBdev3", 00:10:04.439 "uuid": "c2ed35eb-1176-532e-a3e0-105008ba70cd", 00:10:04.439 "is_configured": true, 00:10:04.439 "data_offset": 2048, 00:10:04.439 "data_size": 63488 00:10:04.439 } 00:10:04.439 ] 00:10:04.439 }' 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.439 21:41:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.008 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:05.008 21:41:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:05.008 [2024-09-29 21:41:23.843664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.947 
21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.947 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.947 "name": "raid_bdev1", 00:10:05.947 "uuid": "8ce4af93-c966-4550-9083-07366ccaa8a2", 00:10:05.947 "strip_size_kb": 0, 00:10:05.947 "state": "online", 00:10:05.947 "raid_level": "raid1", 00:10:05.947 "superblock": true, 00:10:05.947 "num_base_bdevs": 3, 00:10:05.947 "num_base_bdevs_discovered": 3, 00:10:05.947 "num_base_bdevs_operational": 3, 00:10:05.947 "base_bdevs_list": [ 00:10:05.947 { 00:10:05.947 "name": "BaseBdev1", 00:10:05.947 "uuid": "154a2063-5f48-5929-90c1-618a9e55c23d", 
00:10:05.947 "is_configured": true, 00:10:05.947 "data_offset": 2048, 00:10:05.947 "data_size": 63488 00:10:05.947 }, 00:10:05.947 { 00:10:05.947 "name": "BaseBdev2", 00:10:05.947 "uuid": "c93584dd-66b2-5737-ae20-aec4b766c265", 00:10:05.947 "is_configured": true, 00:10:05.947 "data_offset": 2048, 00:10:05.947 "data_size": 63488 00:10:05.947 }, 00:10:05.947 { 00:10:05.947 "name": "BaseBdev3", 00:10:05.947 "uuid": "c2ed35eb-1176-532e-a3e0-105008ba70cd", 00:10:05.947 "is_configured": true, 00:10:05.947 "data_offset": 2048, 00:10:05.947 "data_size": 63488 00:10:05.948 } 00:10:05.948 ] 00:10:05.948 }' 00:10:05.948 21:41:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.948 21:41:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.516 [2024-09-29 21:41:25.253290] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.516 [2024-09-29 21:41:25.253419] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.516 [2024-09-29 21:41:25.255986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.516 [2024-09-29 21:41:25.256104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.516 [2024-09-29 21:41:25.256260] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.516 [2024-09-29 21:41:25.256325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:06.516 { 00:10:06.516 "results": [ 00:10:06.516 { 00:10:06.516 "job": "raid_bdev1", 
00:10:06.516 "core_mask": "0x1", 00:10:06.516 "workload": "randrw", 00:10:06.516 "percentage": 50, 00:10:06.516 "status": "finished", 00:10:06.516 "queue_depth": 1, 00:10:06.516 "io_size": 131072, 00:10:06.516 "runtime": 1.41042, 00:10:06.516 "iops": 10544.376852285135, 00:10:06.516 "mibps": 1318.047106535642, 00:10:06.516 "io_failed": 0, 00:10:06.516 "io_timeout": 0, 00:10:06.516 "avg_latency_us": 92.37871513773428, 00:10:06.516 "min_latency_us": 22.246288209606988, 00:10:06.516 "max_latency_us": 1438.071615720524 00:10:06.516 } 00:10:06.516 ], 00:10:06.516 "core_count": 1 00:10:06.516 } 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69167 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 69167 ']' 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 69167 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69167 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.516 killing process with pid 69167 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69167' 00:10:06.516 21:41:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 69167 00:10:06.516 [2024-09-29 21:41:25.303255] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.516 21:41:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 69167 00:10:06.776 [2024-09-29 21:41:25.550295] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.156 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Ghe7db5Que 00:10:08.156 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:08.156 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:08.156 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:08.156 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:08.156 ************************************ 00:10:08.156 END TEST raid_read_error_test 00:10:08.156 ************************************ 00:10:08.156 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:08.156 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:08.156 21:41:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:08.156 00:10:08.156 real 0m4.830s 00:10:08.156 user 0m5.586s 00:10:08.156 sys 0m0.695s 00:10:08.156 21:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.156 21:41:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.156 21:41:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:08.156 21:41:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:08.156 21:41:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.156 21:41:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.156 ************************************ 00:10:08.156 START TEST raid_write_error_test 00:10:08.156 ************************************ 00:10:08.156 21:41:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EuvFfQdsCX 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69314 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69314 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69314 ']' 00:10:08.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.156 21:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.156 [2024-09-29 21:41:27.139465] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:08.415 [2024-09-29 21:41:27.139672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69314 ] 00:10:08.415 [2024-09-29 21:41:27.309186] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.675 [2024-09-29 21:41:27.552702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.934 [2024-09-29 21:41:27.782768] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.935 [2024-09-29 21:41:27.782803] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.194 21:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.194 21:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:09.194 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.194 21:41:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:09.194 21:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.194 21:41:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.194 BaseBdev1_malloc 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.194 true 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.194 [2024-09-29 21:41:28.031822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:09.194 [2024-09-29 21:41:28.031963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.194 [2024-09-29 21:41:28.031985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:09.194 [2024-09-29 21:41:28.031997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.194 [2024-09-29 21:41:28.034417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.194 [2024-09-29 21:41:28.034458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:09.194 BaseBdev1 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:09.194 BaseBdev2_malloc 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.194 true 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.194 [2024-09-29 21:41:28.132486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:09.194 [2024-09-29 21:41:28.132547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.194 [2024-09-29 21:41:28.132564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:09.194 [2024-09-29 21:41:28.132576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.194 [2024-09-29 21:41:28.134928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.194 [2024-09-29 21:41:28.134968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:09.194 BaseBdev2 00:10:09.194 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.195 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.195 21:41:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:09.195 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.195 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.454 BaseBdev3_malloc 00:10:09.454 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.454 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:09.454 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.454 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.454 true 00:10:09.454 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.454 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:09.454 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.454 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.454 [2024-09-29 21:41:28.206187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:09.455 [2024-09-29 21:41:28.206251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.455 [2024-09-29 21:41:28.206267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:09.455 [2024-09-29 21:41:28.206278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.455 [2024-09-29 21:41:28.208634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.455 [2024-09-29 21:41:28.208676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:09.455 BaseBdev3 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.455 [2024-09-29 21:41:28.218247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.455 [2024-09-29 21:41:28.220418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.455 [2024-09-29 21:41:28.220492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.455 [2024-09-29 21:41:28.220698] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:09.455 [2024-09-29 21:41:28.220710] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:09.455 [2024-09-29 21:41:28.220946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:09.455 [2024-09-29 21:41:28.221183] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:09.455 [2024-09-29 21:41:28.221200] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:09.455 [2024-09-29 21:41:28.221346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.455 "name": "raid_bdev1", 00:10:09.455 "uuid": "236f398f-9adb-428c-af40-10d98f1cbb80", 00:10:09.455 "strip_size_kb": 0, 00:10:09.455 "state": "online", 00:10:09.455 "raid_level": "raid1", 00:10:09.455 "superblock": true, 00:10:09.455 "num_base_bdevs": 3, 00:10:09.455 "num_base_bdevs_discovered": 3, 00:10:09.455 "num_base_bdevs_operational": 3, 00:10:09.455 "base_bdevs_list": [ 00:10:09.455 { 00:10:09.455 "name": "BaseBdev1", 00:10:09.455 
"uuid": "667a5da8-f208-55f5-a7a6-10c79120b98f", 00:10:09.455 "is_configured": true, 00:10:09.455 "data_offset": 2048, 00:10:09.455 "data_size": 63488 00:10:09.455 }, 00:10:09.455 { 00:10:09.455 "name": "BaseBdev2", 00:10:09.455 "uuid": "38a4992c-05f5-5be7-8e3c-ad66d0e86a33", 00:10:09.455 "is_configured": true, 00:10:09.455 "data_offset": 2048, 00:10:09.455 "data_size": 63488 00:10:09.455 }, 00:10:09.455 { 00:10:09.455 "name": "BaseBdev3", 00:10:09.455 "uuid": "52a0e108-108e-56c5-9e40-5e83907f490b", 00:10:09.455 "is_configured": true, 00:10:09.455 "data_offset": 2048, 00:10:09.455 "data_size": 63488 00:10:09.455 } 00:10:09.455 ] 00:10:09.455 }' 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.455 21:41:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.719 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:09.719 21:41:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:09.993 [2024-09-29 21:41:28.714698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.952 [2024-09-29 21:41:29.651418] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:10.952 [2024-09-29 21:41:29.651611] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:10.952 [2024-09-29 21:41:29.651868] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.952 "name": "raid_bdev1", 00:10:10.952 "uuid": "236f398f-9adb-428c-af40-10d98f1cbb80", 00:10:10.952 "strip_size_kb": 0, 00:10:10.952 "state": "online", 00:10:10.952 "raid_level": "raid1", 00:10:10.952 "superblock": true, 00:10:10.952 "num_base_bdevs": 3, 00:10:10.952 "num_base_bdevs_discovered": 2, 00:10:10.952 "num_base_bdevs_operational": 2, 00:10:10.952 "base_bdevs_list": [ 00:10:10.952 { 00:10:10.952 "name": null, 00:10:10.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.952 "is_configured": false, 00:10:10.952 "data_offset": 0, 00:10:10.952 "data_size": 63488 00:10:10.952 }, 00:10:10.952 { 00:10:10.952 "name": "BaseBdev2", 00:10:10.952 "uuid": "38a4992c-05f5-5be7-8e3c-ad66d0e86a33", 00:10:10.952 "is_configured": true, 00:10:10.952 "data_offset": 2048, 00:10:10.952 "data_size": 63488 00:10:10.952 }, 00:10:10.952 { 00:10:10.952 "name": "BaseBdev3", 00:10:10.952 "uuid": "52a0e108-108e-56c5-9e40-5e83907f490b", 00:10:10.952 "is_configured": true, 00:10:10.952 "data_offset": 2048, 00:10:10.952 "data_size": 63488 00:10:10.952 } 00:10:10.952 ] 00:10:10.952 }' 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.952 21:41:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.212 [2024-09-29 21:41:30.112191] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.212 [2024-09-29 21:41:30.112262] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.212 [2024-09-29 21:41:30.114755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.212 [2024-09-29 21:41:30.114806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.212 [2024-09-29 21:41:30.114889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.212 [2024-09-29 21:41:30.114902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:11.212 { 00:10:11.212 "results": [ 00:10:11.212 { 00:10:11.212 "job": "raid_bdev1", 00:10:11.212 "core_mask": "0x1", 00:10:11.212 "workload": "randrw", 00:10:11.212 "percentage": 50, 00:10:11.212 "status": "finished", 00:10:11.212 "queue_depth": 1, 00:10:11.212 "io_size": 131072, 00:10:11.212 "runtime": 1.39804, 00:10:11.212 "iops": 11967.46874195302, 00:10:11.212 "mibps": 1495.9335927441275, 00:10:11.212 "io_failed": 0, 00:10:11.212 "io_timeout": 0, 00:10:11.212 "avg_latency_us": 81.10795581457322, 00:10:11.212 "min_latency_us": 22.022707423580787, 00:10:11.212 "max_latency_us": 1480.9991266375546 00:10:11.212 } 00:10:11.212 ], 00:10:11.212 "core_count": 1 00:10:11.212 } 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69314 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69314 ']' 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69314 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:11.212 21:41:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69314 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.212 killing process with pid 69314 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69314' 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69314 00:10:11.212 [2024-09-29 21:41:30.155096] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.212 21:41:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69314 00:10:11.472 [2024-09-29 21:41:30.397316] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.853 21:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EuvFfQdsCX 00:10:12.853 21:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:12.853 21:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:12.853 21:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:12.853 21:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:12.853 21:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.853 21:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:12.853 21:41:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:12.853 00:10:12.853 real 0m4.783s 00:10:12.853 user 0m5.431s 00:10:12.853 sys 0m0.718s 00:10:12.853 
************************************ 00:10:12.853 END TEST raid_write_error_test 00:10:12.853 ************************************ 00:10:12.853 21:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.853 21:41:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.113 21:41:31 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:13.113 21:41:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:13.113 21:41:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:13.113 21:41:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:13.113 21:41:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.113 21:41:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.113 ************************************ 00:10:13.113 START TEST raid_state_function_test 00:10:13.113 ************************************ 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69458 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69458' 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:13.113 Process raid pid: 69458 00:10:13.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69458 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69458 ']' 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.113 21:41:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.113 [2024-09-29 21:41:31.995837] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:13.113 [2024-09-29 21:41:31.995981] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.373 [2024-09-29 21:41:32.163350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.633 [2024-09-29 21:41:32.417185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.893 [2024-09-29 21:41:32.653287] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.893 [2024-09-29 21:41:32.653433] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.893 [2024-09-29 21:41:32.810416] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.893 [2024-09-29 21:41:32.810478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.893 [2024-09-29 21:41:32.810488] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.893 [2024-09-29 21:41:32.810497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.893 [2024-09-29 21:41:32.810503] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:13.893 [2024-09-29 21:41:32.810511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.893 [2024-09-29 21:41:32.810517] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:13.893 [2024-09-29 21:41:32.810526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.893 "name": "Existed_Raid", 00:10:13.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.893 "strip_size_kb": 64, 00:10:13.893 "state": "configuring", 00:10:13.893 "raid_level": "raid0", 00:10:13.893 "superblock": false, 00:10:13.893 "num_base_bdevs": 4, 00:10:13.893 "num_base_bdevs_discovered": 0, 00:10:13.893 "num_base_bdevs_operational": 4, 00:10:13.893 "base_bdevs_list": [ 00:10:13.893 { 00:10:13.893 "name": "BaseBdev1", 00:10:13.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.893 "is_configured": false, 00:10:13.893 "data_offset": 0, 00:10:13.893 "data_size": 0 00:10:13.893 }, 00:10:13.893 { 00:10:13.893 "name": "BaseBdev2", 00:10:13.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.893 "is_configured": false, 00:10:13.893 "data_offset": 0, 00:10:13.893 "data_size": 0 00:10:13.893 }, 00:10:13.893 { 00:10:13.893 "name": "BaseBdev3", 00:10:13.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.893 "is_configured": false, 00:10:13.893 "data_offset": 0, 00:10:13.893 "data_size": 0 00:10:13.893 }, 00:10:13.893 { 00:10:13.893 "name": "BaseBdev4", 00:10:13.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.893 "is_configured": false, 00:10:13.893 "data_offset": 0, 00:10:13.893 "data_size": 0 00:10:13.893 } 00:10:13.893 ] 00:10:13.893 }' 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.893 21:41:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.464 [2024-09-29 21:41:33.221624] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.464 [2024-09-29 21:41:33.221744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.464 [2024-09-29 21:41:33.233635] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:14.464 [2024-09-29 21:41:33.233716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:14.464 [2024-09-29 21:41:33.233744] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.464 [2024-09-29 21:41:33.233766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.464 [2024-09-29 21:41:33.233784] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.464 [2024-09-29 21:41:33.233804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.464 [2024-09-29 21:41:33.233821] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:14.464 [2024-09-29 21:41:33.233858] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.464 [2024-09-29 21:41:33.299832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.464 BaseBdev1 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.464 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.464 [ 00:10:14.464 { 00:10:14.464 "name": "BaseBdev1", 00:10:14.464 "aliases": [ 00:10:14.464 "8006ca8f-5e0e-40b8-a025-4c1cb81aa89c" 00:10:14.464 ], 00:10:14.464 "product_name": "Malloc disk", 00:10:14.464 "block_size": 512, 00:10:14.464 "num_blocks": 65536, 00:10:14.464 "uuid": "8006ca8f-5e0e-40b8-a025-4c1cb81aa89c", 00:10:14.464 "assigned_rate_limits": { 00:10:14.464 "rw_ios_per_sec": 0, 00:10:14.464 "rw_mbytes_per_sec": 0, 00:10:14.464 "r_mbytes_per_sec": 0, 00:10:14.464 "w_mbytes_per_sec": 0 00:10:14.464 }, 00:10:14.464 "claimed": true, 00:10:14.464 "claim_type": "exclusive_write", 00:10:14.464 "zoned": false, 00:10:14.465 "supported_io_types": { 00:10:14.465 "read": true, 00:10:14.465 "write": true, 00:10:14.465 "unmap": true, 00:10:14.465 "flush": true, 00:10:14.465 "reset": true, 00:10:14.465 "nvme_admin": false, 00:10:14.465 "nvme_io": false, 00:10:14.465 "nvme_io_md": false, 00:10:14.465 "write_zeroes": true, 00:10:14.465 "zcopy": true, 00:10:14.465 "get_zone_info": false, 00:10:14.465 "zone_management": false, 00:10:14.465 "zone_append": false, 00:10:14.465 "compare": false, 00:10:14.465 "compare_and_write": false, 00:10:14.465 "abort": true, 00:10:14.465 "seek_hole": false, 00:10:14.465 "seek_data": false, 00:10:14.465 "copy": true, 00:10:14.465 "nvme_iov_md": false 00:10:14.465 }, 00:10:14.465 "memory_domains": [ 00:10:14.465 { 00:10:14.465 "dma_device_id": "system", 00:10:14.465 "dma_device_type": 1 00:10:14.465 }, 00:10:14.465 { 00:10:14.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.465 "dma_device_type": 2 00:10:14.465 } 00:10:14.465 ], 00:10:14.465 "driver_specific": {} 00:10:14.465 } 00:10:14.465 ] 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.465 "name": "Existed_Raid", 
00:10:14.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.465 "strip_size_kb": 64, 00:10:14.465 "state": "configuring", 00:10:14.465 "raid_level": "raid0", 00:10:14.465 "superblock": false, 00:10:14.465 "num_base_bdevs": 4, 00:10:14.465 "num_base_bdevs_discovered": 1, 00:10:14.465 "num_base_bdevs_operational": 4, 00:10:14.465 "base_bdevs_list": [ 00:10:14.465 { 00:10:14.465 "name": "BaseBdev1", 00:10:14.465 "uuid": "8006ca8f-5e0e-40b8-a025-4c1cb81aa89c", 00:10:14.465 "is_configured": true, 00:10:14.465 "data_offset": 0, 00:10:14.465 "data_size": 65536 00:10:14.465 }, 00:10:14.465 { 00:10:14.465 "name": "BaseBdev2", 00:10:14.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.465 "is_configured": false, 00:10:14.465 "data_offset": 0, 00:10:14.465 "data_size": 0 00:10:14.465 }, 00:10:14.465 { 00:10:14.465 "name": "BaseBdev3", 00:10:14.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.465 "is_configured": false, 00:10:14.465 "data_offset": 0, 00:10:14.465 "data_size": 0 00:10:14.465 }, 00:10:14.465 { 00:10:14.465 "name": "BaseBdev4", 00:10:14.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.465 "is_configured": false, 00:10:14.465 "data_offset": 0, 00:10:14.465 "data_size": 0 00:10:14.465 } 00:10:14.465 ] 00:10:14.465 }' 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.465 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.035 [2024-09-29 21:41:33.779022] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.035 [2024-09-29 21:41:33.779076] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.035 [2024-09-29 21:41:33.791083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.035 [2024-09-29 21:41:33.793210] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.035 [2024-09-29 21:41:33.793257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.035 [2024-09-29 21:41:33.793267] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:15.035 [2024-09-29 21:41:33.793279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:15.035 [2024-09-29 21:41:33.793286] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:15.035 [2024-09-29 21:41:33.793295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.035 "name": "Existed_Raid", 00:10:15.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.035 "strip_size_kb": 64, 00:10:15.035 "state": "configuring", 00:10:15.035 "raid_level": "raid0", 00:10:15.035 "superblock": false, 00:10:15.035 "num_base_bdevs": 4, 00:10:15.035 
"num_base_bdevs_discovered": 1, 00:10:15.035 "num_base_bdevs_operational": 4, 00:10:15.035 "base_bdevs_list": [ 00:10:15.035 { 00:10:15.035 "name": "BaseBdev1", 00:10:15.035 "uuid": "8006ca8f-5e0e-40b8-a025-4c1cb81aa89c", 00:10:15.035 "is_configured": true, 00:10:15.035 "data_offset": 0, 00:10:15.035 "data_size": 65536 00:10:15.035 }, 00:10:15.035 { 00:10:15.035 "name": "BaseBdev2", 00:10:15.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.035 "is_configured": false, 00:10:15.035 "data_offset": 0, 00:10:15.035 "data_size": 0 00:10:15.035 }, 00:10:15.035 { 00:10:15.035 "name": "BaseBdev3", 00:10:15.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.035 "is_configured": false, 00:10:15.035 "data_offset": 0, 00:10:15.035 "data_size": 0 00:10:15.035 }, 00:10:15.035 { 00:10:15.035 "name": "BaseBdev4", 00:10:15.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.035 "is_configured": false, 00:10:15.035 "data_offset": 0, 00:10:15.035 "data_size": 0 00:10:15.035 } 00:10:15.035 ] 00:10:15.035 }' 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.035 21:41:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.295 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:15.295 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.295 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.295 [2024-09-29 21:41:34.267391] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.295 BaseBdev2 00:10:15.295 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.295 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:15.295 21:41:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:15.295 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.295 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:15.295 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.295 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.295 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.295 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.295 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.554 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.554 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:15.554 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.554 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.554 [ 00:10:15.554 { 00:10:15.554 "name": "BaseBdev2", 00:10:15.554 "aliases": [ 00:10:15.554 "c7c31c91-73a9-4d1b-8535-fe309992e2ca" 00:10:15.554 ], 00:10:15.554 "product_name": "Malloc disk", 00:10:15.554 "block_size": 512, 00:10:15.554 "num_blocks": 65536, 00:10:15.554 "uuid": "c7c31c91-73a9-4d1b-8535-fe309992e2ca", 00:10:15.554 "assigned_rate_limits": { 00:10:15.554 "rw_ios_per_sec": 0, 00:10:15.554 "rw_mbytes_per_sec": 0, 00:10:15.554 "r_mbytes_per_sec": 0, 00:10:15.554 "w_mbytes_per_sec": 0 00:10:15.554 }, 00:10:15.554 "claimed": true, 00:10:15.554 "claim_type": "exclusive_write", 00:10:15.554 "zoned": false, 00:10:15.554 "supported_io_types": { 
00:10:15.554 "read": true, 00:10:15.554 "write": true, 00:10:15.554 "unmap": true, 00:10:15.554 "flush": true, 00:10:15.554 "reset": true, 00:10:15.554 "nvme_admin": false, 00:10:15.554 "nvme_io": false, 00:10:15.554 "nvme_io_md": false, 00:10:15.554 "write_zeroes": true, 00:10:15.554 "zcopy": true, 00:10:15.554 "get_zone_info": false, 00:10:15.554 "zone_management": false, 00:10:15.554 "zone_append": false, 00:10:15.554 "compare": false, 00:10:15.554 "compare_and_write": false, 00:10:15.554 "abort": true, 00:10:15.554 "seek_hole": false, 00:10:15.554 "seek_data": false, 00:10:15.554 "copy": true, 00:10:15.554 "nvme_iov_md": false 00:10:15.554 }, 00:10:15.554 "memory_domains": [ 00:10:15.554 { 00:10:15.554 "dma_device_id": "system", 00:10:15.554 "dma_device_type": 1 00:10:15.554 }, 00:10:15.554 { 00:10:15.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.554 "dma_device_type": 2 00:10:15.554 } 00:10:15.554 ], 00:10:15.554 "driver_specific": {} 00:10:15.554 } 00:10:15.554 ] 00:10:15.554 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.554 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:15.554 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.554 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.554 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.554 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.554 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.554 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.554 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:15.555 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.555 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.555 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.555 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.555 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.555 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.555 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.555 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.555 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.555 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.555 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.555 "name": "Existed_Raid", 00:10:15.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.555 "strip_size_kb": 64, 00:10:15.555 "state": "configuring", 00:10:15.555 "raid_level": "raid0", 00:10:15.555 "superblock": false, 00:10:15.555 "num_base_bdevs": 4, 00:10:15.555 "num_base_bdevs_discovered": 2, 00:10:15.555 "num_base_bdevs_operational": 4, 00:10:15.555 "base_bdevs_list": [ 00:10:15.555 { 00:10:15.555 "name": "BaseBdev1", 00:10:15.555 "uuid": "8006ca8f-5e0e-40b8-a025-4c1cb81aa89c", 00:10:15.555 "is_configured": true, 00:10:15.555 "data_offset": 0, 00:10:15.555 "data_size": 65536 00:10:15.555 }, 00:10:15.555 { 00:10:15.555 "name": "BaseBdev2", 00:10:15.555 "uuid": "c7c31c91-73a9-4d1b-8535-fe309992e2ca", 00:10:15.555 
"is_configured": true, 00:10:15.555 "data_offset": 0, 00:10:15.555 "data_size": 65536 00:10:15.555 }, 00:10:15.555 { 00:10:15.555 "name": "BaseBdev3", 00:10:15.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.555 "is_configured": false, 00:10:15.555 "data_offset": 0, 00:10:15.555 "data_size": 0 00:10:15.555 }, 00:10:15.555 { 00:10:15.555 "name": "BaseBdev4", 00:10:15.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.555 "is_configured": false, 00:10:15.555 "data_offset": 0, 00:10:15.555 "data_size": 0 00:10:15.555 } 00:10:15.555 ] 00:10:15.555 }' 00:10:15.555 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.555 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.815 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:15.815 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.815 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.074 [2024-09-29 21:41:34.823772] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.074 BaseBdev3 00:10:16.074 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.075 [ 00:10:16.075 { 00:10:16.075 "name": "BaseBdev3", 00:10:16.075 "aliases": [ 00:10:16.075 "ca75d605-55ff-4824-88ea-c3f1db6df5e8" 00:10:16.075 ], 00:10:16.075 "product_name": "Malloc disk", 00:10:16.075 "block_size": 512, 00:10:16.075 "num_blocks": 65536, 00:10:16.075 "uuid": "ca75d605-55ff-4824-88ea-c3f1db6df5e8", 00:10:16.075 "assigned_rate_limits": { 00:10:16.075 "rw_ios_per_sec": 0, 00:10:16.075 "rw_mbytes_per_sec": 0, 00:10:16.075 "r_mbytes_per_sec": 0, 00:10:16.075 "w_mbytes_per_sec": 0 00:10:16.075 }, 00:10:16.075 "claimed": true, 00:10:16.075 "claim_type": "exclusive_write", 00:10:16.075 "zoned": false, 00:10:16.075 "supported_io_types": { 00:10:16.075 "read": true, 00:10:16.075 "write": true, 00:10:16.075 "unmap": true, 00:10:16.075 "flush": true, 00:10:16.075 "reset": true, 00:10:16.075 "nvme_admin": false, 00:10:16.075 "nvme_io": false, 00:10:16.075 "nvme_io_md": false, 00:10:16.075 "write_zeroes": true, 00:10:16.075 "zcopy": true, 00:10:16.075 "get_zone_info": false, 00:10:16.075 "zone_management": false, 00:10:16.075 "zone_append": false, 00:10:16.075 "compare": false, 00:10:16.075 "compare_and_write": false, 
00:10:16.075 "abort": true, 00:10:16.075 "seek_hole": false, 00:10:16.075 "seek_data": false, 00:10:16.075 "copy": true, 00:10:16.075 "nvme_iov_md": false 00:10:16.075 }, 00:10:16.075 "memory_domains": [ 00:10:16.075 { 00:10:16.075 "dma_device_id": "system", 00:10:16.075 "dma_device_type": 1 00:10:16.075 }, 00:10:16.075 { 00:10:16.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.075 "dma_device_type": 2 00:10:16.075 } 00:10:16.075 ], 00:10:16.075 "driver_specific": {} 00:10:16.075 } 00:10:16.075 ] 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.075 "name": "Existed_Raid", 00:10:16.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.075 "strip_size_kb": 64, 00:10:16.075 "state": "configuring", 00:10:16.075 "raid_level": "raid0", 00:10:16.075 "superblock": false, 00:10:16.075 "num_base_bdevs": 4, 00:10:16.075 "num_base_bdevs_discovered": 3, 00:10:16.075 "num_base_bdevs_operational": 4, 00:10:16.075 "base_bdevs_list": [ 00:10:16.075 { 00:10:16.075 "name": "BaseBdev1", 00:10:16.075 "uuid": "8006ca8f-5e0e-40b8-a025-4c1cb81aa89c", 00:10:16.075 "is_configured": true, 00:10:16.075 "data_offset": 0, 00:10:16.075 "data_size": 65536 00:10:16.075 }, 00:10:16.075 { 00:10:16.075 "name": "BaseBdev2", 00:10:16.075 "uuid": "c7c31c91-73a9-4d1b-8535-fe309992e2ca", 00:10:16.075 "is_configured": true, 00:10:16.075 "data_offset": 0, 00:10:16.075 "data_size": 65536 00:10:16.075 }, 00:10:16.075 { 00:10:16.075 "name": "BaseBdev3", 00:10:16.075 "uuid": "ca75d605-55ff-4824-88ea-c3f1db6df5e8", 00:10:16.075 "is_configured": true, 00:10:16.075 "data_offset": 0, 00:10:16.075 "data_size": 65536 00:10:16.075 }, 00:10:16.075 { 00:10:16.075 "name": "BaseBdev4", 00:10:16.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.075 "is_configured": false, 
00:10:16.075 "data_offset": 0, 00:10:16.075 "data_size": 0 00:10:16.075 } 00:10:16.075 ] 00:10:16.075 }' 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.075 21:41:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.335 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:16.335 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.335 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.595 [2024-09-29 21:41:35.335169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:16.595 [2024-09-29 21:41:35.335292] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:16.595 [2024-09-29 21:41:35.335319] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:16.595 [2024-09-29 21:41:35.335678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:16.595 [2024-09-29 21:41:35.335906] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:16.595 [2024-09-29 21:41:35.335956] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:16.595 [2024-09-29 21:41:35.336321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.595 BaseBdev4 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.595 [ 00:10:16.595 { 00:10:16.595 "name": "BaseBdev4", 00:10:16.595 "aliases": [ 00:10:16.595 "2e955c04-c69c-494c-b310-5b2ef7713c71" 00:10:16.595 ], 00:10:16.595 "product_name": "Malloc disk", 00:10:16.595 "block_size": 512, 00:10:16.595 "num_blocks": 65536, 00:10:16.595 "uuid": "2e955c04-c69c-494c-b310-5b2ef7713c71", 00:10:16.595 "assigned_rate_limits": { 00:10:16.595 "rw_ios_per_sec": 0, 00:10:16.595 "rw_mbytes_per_sec": 0, 00:10:16.595 "r_mbytes_per_sec": 0, 00:10:16.595 "w_mbytes_per_sec": 0 00:10:16.595 }, 00:10:16.595 "claimed": true, 00:10:16.595 "claim_type": "exclusive_write", 00:10:16.595 "zoned": false, 00:10:16.595 "supported_io_types": { 00:10:16.595 "read": true, 00:10:16.595 "write": true, 00:10:16.595 "unmap": true, 00:10:16.595 "flush": true, 00:10:16.595 "reset": true, 00:10:16.595 
"nvme_admin": false, 00:10:16.595 "nvme_io": false, 00:10:16.595 "nvme_io_md": false, 00:10:16.595 "write_zeroes": true, 00:10:16.595 "zcopy": true, 00:10:16.595 "get_zone_info": false, 00:10:16.595 "zone_management": false, 00:10:16.595 "zone_append": false, 00:10:16.595 "compare": false, 00:10:16.595 "compare_and_write": false, 00:10:16.595 "abort": true, 00:10:16.595 "seek_hole": false, 00:10:16.595 "seek_data": false, 00:10:16.595 "copy": true, 00:10:16.595 "nvme_iov_md": false 00:10:16.595 }, 00:10:16.595 "memory_domains": [ 00:10:16.595 { 00:10:16.595 "dma_device_id": "system", 00:10:16.595 "dma_device_type": 1 00:10:16.595 }, 00:10:16.595 { 00:10:16.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.595 "dma_device_type": 2 00:10:16.595 } 00:10:16.595 ], 00:10:16.595 "driver_specific": {} 00:10:16.595 } 00:10:16.595 ] 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.595 21:41:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.595 "name": "Existed_Raid", 00:10:16.595 "uuid": "8b2a4ba9-ad67-403e-8931-be9622de7ac8", 00:10:16.595 "strip_size_kb": 64, 00:10:16.595 "state": "online", 00:10:16.595 "raid_level": "raid0", 00:10:16.595 "superblock": false, 00:10:16.595 "num_base_bdevs": 4, 00:10:16.595 "num_base_bdevs_discovered": 4, 00:10:16.595 "num_base_bdevs_operational": 4, 00:10:16.595 "base_bdevs_list": [ 00:10:16.595 { 00:10:16.595 "name": "BaseBdev1", 00:10:16.595 "uuid": "8006ca8f-5e0e-40b8-a025-4c1cb81aa89c", 00:10:16.595 "is_configured": true, 00:10:16.595 "data_offset": 0, 00:10:16.595 "data_size": 65536 00:10:16.595 }, 00:10:16.595 { 00:10:16.595 "name": "BaseBdev2", 00:10:16.595 "uuid": "c7c31c91-73a9-4d1b-8535-fe309992e2ca", 00:10:16.595 "is_configured": true, 00:10:16.595 "data_offset": 0, 00:10:16.595 "data_size": 65536 00:10:16.595 }, 00:10:16.595 { 00:10:16.595 "name": "BaseBdev3", 00:10:16.595 "uuid": 
"ca75d605-55ff-4824-88ea-c3f1db6df5e8", 00:10:16.595 "is_configured": true, 00:10:16.595 "data_offset": 0, 00:10:16.595 "data_size": 65536 00:10:16.595 }, 00:10:16.595 { 00:10:16.595 "name": "BaseBdev4", 00:10:16.595 "uuid": "2e955c04-c69c-494c-b310-5b2ef7713c71", 00:10:16.595 "is_configured": true, 00:10:16.595 "data_offset": 0, 00:10:16.595 "data_size": 65536 00:10:16.595 } 00:10:16.595 ] 00:10:16.595 }' 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.595 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.855 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.855 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.855 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.855 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.855 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.855 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.855 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.855 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.855 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.855 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.856 [2024-09-29 21:41:35.782819] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.856 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.856 21:41:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.856 "name": "Existed_Raid", 00:10:16.856 "aliases": [ 00:10:16.856 "8b2a4ba9-ad67-403e-8931-be9622de7ac8" 00:10:16.856 ], 00:10:16.856 "product_name": "Raid Volume", 00:10:16.856 "block_size": 512, 00:10:16.856 "num_blocks": 262144, 00:10:16.856 "uuid": "8b2a4ba9-ad67-403e-8931-be9622de7ac8", 00:10:16.856 "assigned_rate_limits": { 00:10:16.856 "rw_ios_per_sec": 0, 00:10:16.856 "rw_mbytes_per_sec": 0, 00:10:16.856 "r_mbytes_per_sec": 0, 00:10:16.856 "w_mbytes_per_sec": 0 00:10:16.856 }, 00:10:16.856 "claimed": false, 00:10:16.856 "zoned": false, 00:10:16.856 "supported_io_types": { 00:10:16.856 "read": true, 00:10:16.856 "write": true, 00:10:16.856 "unmap": true, 00:10:16.856 "flush": true, 00:10:16.856 "reset": true, 00:10:16.856 "nvme_admin": false, 00:10:16.856 "nvme_io": false, 00:10:16.856 "nvme_io_md": false, 00:10:16.856 "write_zeroes": true, 00:10:16.856 "zcopy": false, 00:10:16.856 "get_zone_info": false, 00:10:16.856 "zone_management": false, 00:10:16.856 "zone_append": false, 00:10:16.856 "compare": false, 00:10:16.856 "compare_and_write": false, 00:10:16.856 "abort": false, 00:10:16.856 "seek_hole": false, 00:10:16.856 "seek_data": false, 00:10:16.856 "copy": false, 00:10:16.856 "nvme_iov_md": false 00:10:16.856 }, 00:10:16.856 "memory_domains": [ 00:10:16.856 { 00:10:16.856 "dma_device_id": "system", 00:10:16.856 "dma_device_type": 1 00:10:16.856 }, 00:10:16.856 { 00:10:16.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.856 "dma_device_type": 2 00:10:16.856 }, 00:10:16.856 { 00:10:16.856 "dma_device_id": "system", 00:10:16.856 "dma_device_type": 1 00:10:16.856 }, 00:10:16.856 { 00:10:16.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.856 "dma_device_type": 2 00:10:16.856 }, 00:10:16.856 { 00:10:16.856 "dma_device_id": "system", 00:10:16.856 "dma_device_type": 1 00:10:16.856 }, 00:10:16.856 { 00:10:16.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:16.856 "dma_device_type": 2 00:10:16.856 }, 00:10:16.856 { 00:10:16.856 "dma_device_id": "system", 00:10:16.856 "dma_device_type": 1 00:10:16.856 }, 00:10:16.856 { 00:10:16.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.856 "dma_device_type": 2 00:10:16.856 } 00:10:16.856 ], 00:10:16.856 "driver_specific": { 00:10:16.856 "raid": { 00:10:16.856 "uuid": "8b2a4ba9-ad67-403e-8931-be9622de7ac8", 00:10:16.856 "strip_size_kb": 64, 00:10:16.856 "state": "online", 00:10:16.856 "raid_level": "raid0", 00:10:16.856 "superblock": false, 00:10:16.856 "num_base_bdevs": 4, 00:10:16.856 "num_base_bdevs_discovered": 4, 00:10:16.856 "num_base_bdevs_operational": 4, 00:10:16.856 "base_bdevs_list": [ 00:10:16.856 { 00:10:16.856 "name": "BaseBdev1", 00:10:16.856 "uuid": "8006ca8f-5e0e-40b8-a025-4c1cb81aa89c", 00:10:16.856 "is_configured": true, 00:10:16.856 "data_offset": 0, 00:10:16.856 "data_size": 65536 00:10:16.856 }, 00:10:16.856 { 00:10:16.856 "name": "BaseBdev2", 00:10:16.856 "uuid": "c7c31c91-73a9-4d1b-8535-fe309992e2ca", 00:10:16.856 "is_configured": true, 00:10:16.856 "data_offset": 0, 00:10:16.856 "data_size": 65536 00:10:16.856 }, 00:10:16.856 { 00:10:16.856 "name": "BaseBdev3", 00:10:16.856 "uuid": "ca75d605-55ff-4824-88ea-c3f1db6df5e8", 00:10:16.856 "is_configured": true, 00:10:16.856 "data_offset": 0, 00:10:16.856 "data_size": 65536 00:10:16.856 }, 00:10:16.856 { 00:10:16.856 "name": "BaseBdev4", 00:10:16.856 "uuid": "2e955c04-c69c-494c-b310-5b2ef7713c71", 00:10:16.856 "is_configured": true, 00:10:16.856 "data_offset": 0, 00:10:16.856 "data_size": 65536 00:10:16.856 } 00:10:16.856 ] 00:10:16.856 } 00:10:16.856 } 00:10:16.856 }' 00:10:16.856 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:17.116 BaseBdev2 00:10:17.116 BaseBdev3 
00:10:17.116 BaseBdev4' 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.116 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.116 21:41:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.117 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.117 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.117 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:17.117 21:41:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.117 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.117 21:41:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.117 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.117 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.117 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.117 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.117 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:17.117 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.117 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.117 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.117 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.117 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.117 21:41:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.117 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:17.117 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.117 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.117 [2024-09-29 21:41:36.066030] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:17.117 [2024-09-29 21:41:36.066075] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.117 [2024-09-29 21:41:36.066129] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.375 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.376 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.376 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.376 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.376 "name": "Existed_Raid", 00:10:17.376 "uuid": "8b2a4ba9-ad67-403e-8931-be9622de7ac8", 00:10:17.376 "strip_size_kb": 64, 00:10:17.376 "state": "offline", 00:10:17.376 "raid_level": "raid0", 00:10:17.376 "superblock": false, 00:10:17.376 "num_base_bdevs": 4, 00:10:17.376 "num_base_bdevs_discovered": 3, 00:10:17.376 "num_base_bdevs_operational": 3, 00:10:17.376 "base_bdevs_list": [ 00:10:17.376 { 00:10:17.376 "name": null, 00:10:17.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.376 "is_configured": false, 00:10:17.376 "data_offset": 0, 00:10:17.376 "data_size": 65536 00:10:17.376 }, 00:10:17.376 { 00:10:17.376 "name": "BaseBdev2", 00:10:17.376 "uuid": "c7c31c91-73a9-4d1b-8535-fe309992e2ca", 00:10:17.376 "is_configured": 
true, 00:10:17.376 "data_offset": 0, 00:10:17.376 "data_size": 65536 00:10:17.376 }, 00:10:17.376 { 00:10:17.376 "name": "BaseBdev3", 00:10:17.376 "uuid": "ca75d605-55ff-4824-88ea-c3f1db6df5e8", 00:10:17.376 "is_configured": true, 00:10:17.376 "data_offset": 0, 00:10:17.376 "data_size": 65536 00:10:17.376 }, 00:10:17.376 { 00:10:17.376 "name": "BaseBdev4", 00:10:17.376 "uuid": "2e955c04-c69c-494c-b310-5b2ef7713c71", 00:10:17.376 "is_configured": true, 00:10:17.376 "data_offset": 0, 00:10:17.376 "data_size": 65536 00:10:17.376 } 00:10:17.376 ] 00:10:17.376 }' 00:10:17.376 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.376 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.634 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:17.634 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.894 [2024-09-29 21:41:36.662403] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.894 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.894 [2024-09-29 21:41:36.824017] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:18.153 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.153 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.153 21:41:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.153 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.153 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:18.153 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.153 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.153 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.153 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:18.153 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:18.153 21:41:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:18.153 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.154 21:41:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.154 [2024-09-29 21:41:36.985884] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:18.154 [2024-09-29 21:41:36.986012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:18.154 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.154 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:18.154 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:18.154 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.154 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:18.154 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.154 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.154 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.413 BaseBdev2 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.413 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.414 [ 00:10:18.414 { 00:10:18.414 "name": "BaseBdev2", 00:10:18.414 "aliases": [ 00:10:18.414 "5aeb3abc-e8d8-47bf-b218-d66dc7c0f5ff" 00:10:18.414 ], 00:10:18.414 "product_name": "Malloc disk", 00:10:18.414 "block_size": 512, 00:10:18.414 "num_blocks": 65536, 00:10:18.414 "uuid": "5aeb3abc-e8d8-47bf-b218-d66dc7c0f5ff", 00:10:18.414 "assigned_rate_limits": { 00:10:18.414 "rw_ios_per_sec": 0, 00:10:18.414 "rw_mbytes_per_sec": 0, 00:10:18.414 "r_mbytes_per_sec": 0, 00:10:18.414 "w_mbytes_per_sec": 0 00:10:18.414 }, 00:10:18.414 "claimed": false, 00:10:18.414 "zoned": false, 00:10:18.414 "supported_io_types": { 00:10:18.414 "read": true, 00:10:18.414 "write": true, 00:10:18.414 "unmap": true, 00:10:18.414 "flush": true, 00:10:18.414 "reset": true, 00:10:18.414 "nvme_admin": false, 00:10:18.414 "nvme_io": false, 00:10:18.414 "nvme_io_md": false, 00:10:18.414 "write_zeroes": true, 00:10:18.414 "zcopy": true, 00:10:18.414 "get_zone_info": false, 00:10:18.414 "zone_management": false, 00:10:18.414 "zone_append": false, 00:10:18.414 "compare": false, 00:10:18.414 "compare_and_write": false, 00:10:18.414 "abort": true, 00:10:18.414 "seek_hole": false, 00:10:18.414 
"seek_data": false, 00:10:18.414 "copy": true, 00:10:18.414 "nvme_iov_md": false 00:10:18.414 }, 00:10:18.414 "memory_domains": [ 00:10:18.414 { 00:10:18.414 "dma_device_id": "system", 00:10:18.414 "dma_device_type": 1 00:10:18.414 }, 00:10:18.414 { 00:10:18.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.414 "dma_device_type": 2 00:10:18.414 } 00:10:18.414 ], 00:10:18.414 "driver_specific": {} 00:10:18.414 } 00:10:18.414 ] 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.414 BaseBdev3 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.414 [ 00:10:18.414 { 00:10:18.414 "name": "BaseBdev3", 00:10:18.414 "aliases": [ 00:10:18.414 "bb3718a3-e4ca-4485-ac64-e891e9658ab8" 00:10:18.414 ], 00:10:18.414 "product_name": "Malloc disk", 00:10:18.414 "block_size": 512, 00:10:18.414 "num_blocks": 65536, 00:10:18.414 "uuid": "bb3718a3-e4ca-4485-ac64-e891e9658ab8", 00:10:18.414 "assigned_rate_limits": { 00:10:18.414 "rw_ios_per_sec": 0, 00:10:18.414 "rw_mbytes_per_sec": 0, 00:10:18.414 "r_mbytes_per_sec": 0, 00:10:18.414 "w_mbytes_per_sec": 0 00:10:18.414 }, 00:10:18.414 "claimed": false, 00:10:18.414 "zoned": false, 00:10:18.414 "supported_io_types": { 00:10:18.414 "read": true, 00:10:18.414 "write": true, 00:10:18.414 "unmap": true, 00:10:18.414 "flush": true, 00:10:18.414 "reset": true, 00:10:18.414 "nvme_admin": false, 00:10:18.414 "nvme_io": false, 00:10:18.414 "nvme_io_md": false, 00:10:18.414 "write_zeroes": true, 00:10:18.414 "zcopy": true, 00:10:18.414 "get_zone_info": false, 00:10:18.414 "zone_management": false, 00:10:18.414 "zone_append": false, 00:10:18.414 "compare": false, 00:10:18.414 "compare_and_write": false, 00:10:18.414 "abort": true, 00:10:18.414 "seek_hole": false, 00:10:18.414 "seek_data": false, 
00:10:18.414 "copy": true, 00:10:18.414 "nvme_iov_md": false 00:10:18.414 }, 00:10:18.414 "memory_domains": [ 00:10:18.414 { 00:10:18.414 "dma_device_id": "system", 00:10:18.414 "dma_device_type": 1 00:10:18.414 }, 00:10:18.414 { 00:10:18.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.414 "dma_device_type": 2 00:10:18.414 } 00:10:18.414 ], 00:10:18.414 "driver_specific": {} 00:10:18.414 } 00:10:18.414 ] 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.414 BaseBdev4 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.414 
21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.414 [ 00:10:18.414 { 00:10:18.414 "name": "BaseBdev4", 00:10:18.414 "aliases": [ 00:10:18.414 "021e5667-a9b8-43b4-97c1-4c3cad7d3697" 00:10:18.414 ], 00:10:18.414 "product_name": "Malloc disk", 00:10:18.414 "block_size": 512, 00:10:18.414 "num_blocks": 65536, 00:10:18.414 "uuid": "021e5667-a9b8-43b4-97c1-4c3cad7d3697", 00:10:18.414 "assigned_rate_limits": { 00:10:18.414 "rw_ios_per_sec": 0, 00:10:18.414 "rw_mbytes_per_sec": 0, 00:10:18.414 "r_mbytes_per_sec": 0, 00:10:18.414 "w_mbytes_per_sec": 0 00:10:18.414 }, 00:10:18.414 "claimed": false, 00:10:18.414 "zoned": false, 00:10:18.414 "supported_io_types": { 00:10:18.414 "read": true, 00:10:18.414 "write": true, 00:10:18.414 "unmap": true, 00:10:18.414 "flush": true, 00:10:18.414 "reset": true, 00:10:18.414 "nvme_admin": false, 00:10:18.414 "nvme_io": false, 00:10:18.414 "nvme_io_md": false, 00:10:18.414 "write_zeroes": true, 00:10:18.414 "zcopy": true, 00:10:18.414 "get_zone_info": false, 00:10:18.414 "zone_management": false, 00:10:18.414 "zone_append": false, 00:10:18.414 "compare": false, 00:10:18.414 "compare_and_write": false, 00:10:18.414 "abort": true, 00:10:18.414 "seek_hole": false, 00:10:18.414 "seek_data": false, 00:10:18.414 
"copy": true, 00:10:18.414 "nvme_iov_md": false 00:10:18.414 }, 00:10:18.414 "memory_domains": [ 00:10:18.414 { 00:10:18.414 "dma_device_id": "system", 00:10:18.414 "dma_device_type": 1 00:10:18.414 }, 00:10:18.414 { 00:10:18.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.414 "dma_device_type": 2 00:10:18.414 } 00:10:18.414 ], 00:10:18.414 "driver_specific": {} 00:10:18.414 } 00:10:18.414 ] 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.414 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.673 [2024-09-29 21:41:37.397670] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.673 [2024-09-29 21:41:37.397784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.673 [2024-09-29 21:41:37.397826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:18.673 [2024-09-29 21:41:37.400031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:18.673 [2024-09-29 21:41:37.400142] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.673 21:41:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.673 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.673 "name": "Existed_Raid", 00:10:18.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.674 "strip_size_kb": 64, 00:10:18.674 "state": "configuring", 00:10:18.674 
"raid_level": "raid0", 00:10:18.674 "superblock": false, 00:10:18.674 "num_base_bdevs": 4, 00:10:18.674 "num_base_bdevs_discovered": 3, 00:10:18.674 "num_base_bdevs_operational": 4, 00:10:18.674 "base_bdevs_list": [ 00:10:18.674 { 00:10:18.674 "name": "BaseBdev1", 00:10:18.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.674 "is_configured": false, 00:10:18.674 "data_offset": 0, 00:10:18.674 "data_size": 0 00:10:18.674 }, 00:10:18.674 { 00:10:18.674 "name": "BaseBdev2", 00:10:18.674 "uuid": "5aeb3abc-e8d8-47bf-b218-d66dc7c0f5ff", 00:10:18.674 "is_configured": true, 00:10:18.674 "data_offset": 0, 00:10:18.674 "data_size": 65536 00:10:18.674 }, 00:10:18.674 { 00:10:18.674 "name": "BaseBdev3", 00:10:18.674 "uuid": "bb3718a3-e4ca-4485-ac64-e891e9658ab8", 00:10:18.674 "is_configured": true, 00:10:18.674 "data_offset": 0, 00:10:18.674 "data_size": 65536 00:10:18.674 }, 00:10:18.674 { 00:10:18.674 "name": "BaseBdev4", 00:10:18.674 "uuid": "021e5667-a9b8-43b4-97c1-4c3cad7d3697", 00:10:18.674 "is_configured": true, 00:10:18.674 "data_offset": 0, 00:10:18.674 "data_size": 65536 00:10:18.674 } 00:10:18.674 ] 00:10:18.674 }' 00:10:18.674 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.674 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.933 [2024-09-29 21:41:37.840975] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.933 "name": "Existed_Raid", 00:10:18.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.933 "strip_size_kb": 64, 00:10:18.933 "state": "configuring", 00:10:18.933 "raid_level": "raid0", 00:10:18.933 "superblock": false, 00:10:18.933 
"num_base_bdevs": 4, 00:10:18.933 "num_base_bdevs_discovered": 2, 00:10:18.933 "num_base_bdevs_operational": 4, 00:10:18.933 "base_bdevs_list": [ 00:10:18.933 { 00:10:18.933 "name": "BaseBdev1", 00:10:18.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.933 "is_configured": false, 00:10:18.933 "data_offset": 0, 00:10:18.933 "data_size": 0 00:10:18.933 }, 00:10:18.933 { 00:10:18.933 "name": null, 00:10:18.933 "uuid": "5aeb3abc-e8d8-47bf-b218-d66dc7c0f5ff", 00:10:18.933 "is_configured": false, 00:10:18.933 "data_offset": 0, 00:10:18.933 "data_size": 65536 00:10:18.933 }, 00:10:18.933 { 00:10:18.933 "name": "BaseBdev3", 00:10:18.933 "uuid": "bb3718a3-e4ca-4485-ac64-e891e9658ab8", 00:10:18.933 "is_configured": true, 00:10:18.933 "data_offset": 0, 00:10:18.933 "data_size": 65536 00:10:18.933 }, 00:10:18.933 { 00:10:18.933 "name": "BaseBdev4", 00:10:18.933 "uuid": "021e5667-a9b8-43b4-97c1-4c3cad7d3697", 00:10:18.933 "is_configured": true, 00:10:18.933 "data_offset": 0, 00:10:18.933 "data_size": 65536 00:10:18.933 } 00:10:18.933 ] 00:10:18.933 }' 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.933 21:41:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:19.503 21:41:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.503 [2024-09-29 21:41:38.378059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.503 BaseBdev1 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:19.503 [ 00:10:19.503 { 00:10:19.503 "name": "BaseBdev1", 00:10:19.503 "aliases": [ 00:10:19.503 "a2b35425-a844-4524-850b-7ed3b3db93ad" 00:10:19.503 ], 00:10:19.503 "product_name": "Malloc disk", 00:10:19.503 "block_size": 512, 00:10:19.503 "num_blocks": 65536, 00:10:19.503 "uuid": "a2b35425-a844-4524-850b-7ed3b3db93ad", 00:10:19.503 "assigned_rate_limits": { 00:10:19.503 "rw_ios_per_sec": 0, 00:10:19.503 "rw_mbytes_per_sec": 0, 00:10:19.503 "r_mbytes_per_sec": 0, 00:10:19.503 "w_mbytes_per_sec": 0 00:10:19.503 }, 00:10:19.503 "claimed": true, 00:10:19.503 "claim_type": "exclusive_write", 00:10:19.503 "zoned": false, 00:10:19.503 "supported_io_types": { 00:10:19.503 "read": true, 00:10:19.503 "write": true, 00:10:19.503 "unmap": true, 00:10:19.503 "flush": true, 00:10:19.503 "reset": true, 00:10:19.503 "nvme_admin": false, 00:10:19.503 "nvme_io": false, 00:10:19.503 "nvme_io_md": false, 00:10:19.503 "write_zeroes": true, 00:10:19.503 "zcopy": true, 00:10:19.503 "get_zone_info": false, 00:10:19.503 "zone_management": false, 00:10:19.503 "zone_append": false, 00:10:19.503 "compare": false, 00:10:19.503 "compare_and_write": false, 00:10:19.503 "abort": true, 00:10:19.503 "seek_hole": false, 00:10:19.503 "seek_data": false, 00:10:19.503 "copy": true, 00:10:19.503 "nvme_iov_md": false 00:10:19.503 }, 00:10:19.503 "memory_domains": [ 00:10:19.503 { 00:10:19.503 "dma_device_id": "system", 00:10:19.503 "dma_device_type": 1 00:10:19.503 }, 00:10:19.503 { 00:10:19.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.503 "dma_device_type": 2 00:10:19.503 } 00:10:19.503 ], 00:10:19.503 "driver_specific": {} 00:10:19.503 } 00:10:19.503 ] 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.503 "name": "Existed_Raid", 00:10:19.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.503 "strip_size_kb": 64, 00:10:19.503 "state": "configuring", 00:10:19.503 "raid_level": "raid0", 00:10:19.503 "superblock": false, 
00:10:19.503 "num_base_bdevs": 4, 00:10:19.503 "num_base_bdevs_discovered": 3, 00:10:19.503 "num_base_bdevs_operational": 4, 00:10:19.503 "base_bdevs_list": [ 00:10:19.503 { 00:10:19.503 "name": "BaseBdev1", 00:10:19.503 "uuid": "a2b35425-a844-4524-850b-7ed3b3db93ad", 00:10:19.503 "is_configured": true, 00:10:19.503 "data_offset": 0, 00:10:19.503 "data_size": 65536 00:10:19.503 }, 00:10:19.503 { 00:10:19.503 "name": null, 00:10:19.503 "uuid": "5aeb3abc-e8d8-47bf-b218-d66dc7c0f5ff", 00:10:19.503 "is_configured": false, 00:10:19.503 "data_offset": 0, 00:10:19.503 "data_size": 65536 00:10:19.503 }, 00:10:19.503 { 00:10:19.503 "name": "BaseBdev3", 00:10:19.503 "uuid": "bb3718a3-e4ca-4485-ac64-e891e9658ab8", 00:10:19.503 "is_configured": true, 00:10:19.503 "data_offset": 0, 00:10:19.503 "data_size": 65536 00:10:19.503 }, 00:10:19.503 { 00:10:19.503 "name": "BaseBdev4", 00:10:19.503 "uuid": "021e5667-a9b8-43b4-97c1-4c3cad7d3697", 00:10:19.503 "is_configured": true, 00:10:19.503 "data_offset": 0, 00:10:19.503 "data_size": 65536 00:10:19.503 } 00:10:19.503 ] 00:10:19.503 }' 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.503 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:20.071 21:41:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.071 [2024-09-29 21:41:38.885246] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.071 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.072 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.072 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.072 21:41:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.072 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.072 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.072 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.072 "name": "Existed_Raid", 00:10:20.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.072 "strip_size_kb": 64, 00:10:20.072 "state": "configuring", 00:10:20.072 "raid_level": "raid0", 00:10:20.072 "superblock": false, 00:10:20.072 "num_base_bdevs": 4, 00:10:20.072 "num_base_bdevs_discovered": 2, 00:10:20.072 "num_base_bdevs_operational": 4, 00:10:20.072 "base_bdevs_list": [ 00:10:20.072 { 00:10:20.072 "name": "BaseBdev1", 00:10:20.072 "uuid": "a2b35425-a844-4524-850b-7ed3b3db93ad", 00:10:20.072 "is_configured": true, 00:10:20.072 "data_offset": 0, 00:10:20.072 "data_size": 65536 00:10:20.072 }, 00:10:20.072 { 00:10:20.072 "name": null, 00:10:20.072 "uuid": "5aeb3abc-e8d8-47bf-b218-d66dc7c0f5ff", 00:10:20.072 "is_configured": false, 00:10:20.072 "data_offset": 0, 00:10:20.072 "data_size": 65536 00:10:20.072 }, 00:10:20.072 { 00:10:20.072 "name": null, 00:10:20.072 "uuid": "bb3718a3-e4ca-4485-ac64-e891e9658ab8", 00:10:20.072 "is_configured": false, 00:10:20.072 "data_offset": 0, 00:10:20.072 "data_size": 65536 00:10:20.072 }, 00:10:20.072 { 00:10:20.072 "name": "BaseBdev4", 00:10:20.072 "uuid": "021e5667-a9b8-43b4-97c1-4c3cad7d3697", 00:10:20.072 "is_configured": true, 00:10:20.072 "data_offset": 0, 00:10:20.072 "data_size": 65536 00:10:20.072 } 00:10:20.072 ] 00:10:20.072 }' 00:10:20.072 21:41:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.072 21:41:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.639 21:41:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.639 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.639 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.639 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.639 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.639 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:20.639 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:20.639 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.640 [2024-09-29 21:41:39.400402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.640 "name": "Existed_Raid", 00:10:20.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.640 "strip_size_kb": 64, 00:10:20.640 "state": "configuring", 00:10:20.640 "raid_level": "raid0", 00:10:20.640 "superblock": false, 00:10:20.640 "num_base_bdevs": 4, 00:10:20.640 "num_base_bdevs_discovered": 3, 00:10:20.640 "num_base_bdevs_operational": 4, 00:10:20.640 "base_bdevs_list": [ 00:10:20.640 { 00:10:20.640 "name": "BaseBdev1", 00:10:20.640 "uuid": "a2b35425-a844-4524-850b-7ed3b3db93ad", 00:10:20.640 "is_configured": true, 00:10:20.640 "data_offset": 0, 00:10:20.640 "data_size": 65536 00:10:20.640 }, 00:10:20.640 { 00:10:20.640 "name": null, 00:10:20.640 "uuid": "5aeb3abc-e8d8-47bf-b218-d66dc7c0f5ff", 00:10:20.640 "is_configured": false, 00:10:20.640 "data_offset": 0, 00:10:20.640 "data_size": 65536 00:10:20.640 }, 00:10:20.640 { 00:10:20.640 "name": "BaseBdev3", 00:10:20.640 "uuid": "bb3718a3-e4ca-4485-ac64-e891e9658ab8", 
00:10:20.640 "is_configured": true, 00:10:20.640 "data_offset": 0, 00:10:20.640 "data_size": 65536 00:10:20.640 }, 00:10:20.640 { 00:10:20.640 "name": "BaseBdev4", 00:10:20.640 "uuid": "021e5667-a9b8-43b4-97c1-4c3cad7d3697", 00:10:20.640 "is_configured": true, 00:10:20.640 "data_offset": 0, 00:10:20.640 "data_size": 65536 00:10:20.640 } 00:10:20.640 ] 00:10:20.640 }' 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.640 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.899 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.899 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:20.899 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.899 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.899 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.159 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:21.159 21:41:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:21.159 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.159 21:41:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.159 [2024-09-29 21:41:39.899624] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:21.159 21:41:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.159 "name": "Existed_Raid", 00:10:21.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.159 "strip_size_kb": 64, 00:10:21.159 "state": "configuring", 00:10:21.159 "raid_level": "raid0", 00:10:21.159 "superblock": false, 00:10:21.159 "num_base_bdevs": 4, 00:10:21.159 "num_base_bdevs_discovered": 2, 00:10:21.159 
"num_base_bdevs_operational": 4, 00:10:21.159 "base_bdevs_list": [ 00:10:21.159 { 00:10:21.159 "name": null, 00:10:21.159 "uuid": "a2b35425-a844-4524-850b-7ed3b3db93ad", 00:10:21.159 "is_configured": false, 00:10:21.159 "data_offset": 0, 00:10:21.159 "data_size": 65536 00:10:21.159 }, 00:10:21.159 { 00:10:21.159 "name": null, 00:10:21.159 "uuid": "5aeb3abc-e8d8-47bf-b218-d66dc7c0f5ff", 00:10:21.159 "is_configured": false, 00:10:21.159 "data_offset": 0, 00:10:21.159 "data_size": 65536 00:10:21.159 }, 00:10:21.159 { 00:10:21.159 "name": "BaseBdev3", 00:10:21.159 "uuid": "bb3718a3-e4ca-4485-ac64-e891e9658ab8", 00:10:21.159 "is_configured": true, 00:10:21.159 "data_offset": 0, 00:10:21.159 "data_size": 65536 00:10:21.159 }, 00:10:21.159 { 00:10:21.159 "name": "BaseBdev4", 00:10:21.159 "uuid": "021e5667-a9b8-43b4-97c1-4c3cad7d3697", 00:10:21.159 "is_configured": true, 00:10:21.159 "data_offset": 0, 00:10:21.159 "data_size": 65536 00:10:21.159 } 00:10:21.159 ] 00:10:21.159 }' 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.159 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.728 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.729 [2024-09-29 21:41:40.472054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.729 21:41:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.729 "name": "Existed_Raid", 00:10:21.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.729 "strip_size_kb": 64, 00:10:21.729 "state": "configuring", 00:10:21.729 "raid_level": "raid0", 00:10:21.729 "superblock": false, 00:10:21.729 "num_base_bdevs": 4, 00:10:21.729 "num_base_bdevs_discovered": 3, 00:10:21.729 "num_base_bdevs_operational": 4, 00:10:21.729 "base_bdevs_list": [ 00:10:21.729 { 00:10:21.729 "name": null, 00:10:21.729 "uuid": "a2b35425-a844-4524-850b-7ed3b3db93ad", 00:10:21.729 "is_configured": false, 00:10:21.729 "data_offset": 0, 00:10:21.729 "data_size": 65536 00:10:21.729 }, 00:10:21.729 { 00:10:21.729 "name": "BaseBdev2", 00:10:21.729 "uuid": "5aeb3abc-e8d8-47bf-b218-d66dc7c0f5ff", 00:10:21.729 "is_configured": true, 00:10:21.729 "data_offset": 0, 00:10:21.729 "data_size": 65536 00:10:21.729 }, 00:10:21.729 { 00:10:21.729 "name": "BaseBdev3", 00:10:21.729 "uuid": "bb3718a3-e4ca-4485-ac64-e891e9658ab8", 00:10:21.729 "is_configured": true, 00:10:21.729 "data_offset": 0, 00:10:21.729 "data_size": 65536 00:10:21.729 }, 00:10:21.729 { 00:10:21.729 "name": "BaseBdev4", 00:10:21.729 "uuid": "021e5667-a9b8-43b4-97c1-4c3cad7d3697", 00:10:21.729 "is_configured": true, 00:10:21.729 "data_offset": 0, 00:10:21.729 "data_size": 65536 00:10:21.729 } 00:10:21.729 ] 00:10:21.729 }' 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.729 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.988 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:21.988 
21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.988 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.988 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.247 21:41:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.247 21:41:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a2b35425-a844-4524-850b-7ed3b3db93ad 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.247 [2024-09-29 21:41:41.072424] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:22.247 [2024-09-29 21:41:41.072476] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:22.247 [2024-09-29 21:41:41.072484] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:22.247 [2024-09-29 21:41:41.072800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:22.247 [2024-09-29 21:41:41.072949] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:22.247 [2024-09-29 21:41:41.072962] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:22.247 [2024-09-29 21:41:41.073228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.247 NewBaseBdev 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.247 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:22.247 [ 00:10:22.247 { 00:10:22.247 "name": "NewBaseBdev", 00:10:22.247 "aliases": [ 00:10:22.247 "a2b35425-a844-4524-850b-7ed3b3db93ad" 00:10:22.247 ], 00:10:22.247 "product_name": "Malloc disk", 00:10:22.247 "block_size": 512, 00:10:22.247 "num_blocks": 65536, 00:10:22.247 "uuid": "a2b35425-a844-4524-850b-7ed3b3db93ad", 00:10:22.247 "assigned_rate_limits": { 00:10:22.247 "rw_ios_per_sec": 0, 00:10:22.247 "rw_mbytes_per_sec": 0, 00:10:22.247 "r_mbytes_per_sec": 0, 00:10:22.248 "w_mbytes_per_sec": 0 00:10:22.248 }, 00:10:22.248 "claimed": true, 00:10:22.248 "claim_type": "exclusive_write", 00:10:22.248 "zoned": false, 00:10:22.248 "supported_io_types": { 00:10:22.248 "read": true, 00:10:22.248 "write": true, 00:10:22.248 "unmap": true, 00:10:22.248 "flush": true, 00:10:22.248 "reset": true, 00:10:22.248 "nvme_admin": false, 00:10:22.248 "nvme_io": false, 00:10:22.248 "nvme_io_md": false, 00:10:22.248 "write_zeroes": true, 00:10:22.248 "zcopy": true, 00:10:22.248 "get_zone_info": false, 00:10:22.248 "zone_management": false, 00:10:22.248 "zone_append": false, 00:10:22.248 "compare": false, 00:10:22.248 "compare_and_write": false, 00:10:22.248 "abort": true, 00:10:22.248 "seek_hole": false, 00:10:22.248 "seek_data": false, 00:10:22.248 "copy": true, 00:10:22.248 "nvme_iov_md": false 00:10:22.248 }, 00:10:22.248 "memory_domains": [ 00:10:22.248 { 00:10:22.248 "dma_device_id": "system", 00:10:22.248 "dma_device_type": 1 00:10:22.248 }, 00:10:22.248 { 00:10:22.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.248 "dma_device_type": 2 00:10:22.248 } 00:10:22.248 ], 00:10:22.248 "driver_specific": {} 00:10:22.248 } 00:10:22.248 ] 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.248 "name": "Existed_Raid", 00:10:22.248 "uuid": "c986fa86-63e6-4c4a-9783-0e45ecb9135f", 00:10:22.248 "strip_size_kb": 64, 00:10:22.248 "state": "online", 00:10:22.248 "raid_level": "raid0", 00:10:22.248 "superblock": false, 00:10:22.248 "num_base_bdevs": 4, 00:10:22.248 
"num_base_bdevs_discovered": 4, 00:10:22.248 "num_base_bdevs_operational": 4, 00:10:22.248 "base_bdevs_list": [ 00:10:22.248 { 00:10:22.248 "name": "NewBaseBdev", 00:10:22.248 "uuid": "a2b35425-a844-4524-850b-7ed3b3db93ad", 00:10:22.248 "is_configured": true, 00:10:22.248 "data_offset": 0, 00:10:22.248 "data_size": 65536 00:10:22.248 }, 00:10:22.248 { 00:10:22.248 "name": "BaseBdev2", 00:10:22.248 "uuid": "5aeb3abc-e8d8-47bf-b218-d66dc7c0f5ff", 00:10:22.248 "is_configured": true, 00:10:22.248 "data_offset": 0, 00:10:22.248 "data_size": 65536 00:10:22.248 }, 00:10:22.248 { 00:10:22.248 "name": "BaseBdev3", 00:10:22.248 "uuid": "bb3718a3-e4ca-4485-ac64-e891e9658ab8", 00:10:22.248 "is_configured": true, 00:10:22.248 "data_offset": 0, 00:10:22.248 "data_size": 65536 00:10:22.248 }, 00:10:22.248 { 00:10:22.248 "name": "BaseBdev4", 00:10:22.248 "uuid": "021e5667-a9b8-43b4-97c1-4c3cad7d3697", 00:10:22.248 "is_configured": true, 00:10:22.248 "data_offset": 0, 00:10:22.248 "data_size": 65536 00:10:22.248 } 00:10:22.248 ] 00:10:22.248 }' 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.248 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.817 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:22.817 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:22.817 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.817 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:22.817 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.817 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.817 21:41:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.817 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:22.817 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.817 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.817 [2024-09-29 21:41:41.528110] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.817 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.817 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.817 "name": "Existed_Raid", 00:10:22.817 "aliases": [ 00:10:22.817 "c986fa86-63e6-4c4a-9783-0e45ecb9135f" 00:10:22.817 ], 00:10:22.817 "product_name": "Raid Volume", 00:10:22.817 "block_size": 512, 00:10:22.817 "num_blocks": 262144, 00:10:22.817 "uuid": "c986fa86-63e6-4c4a-9783-0e45ecb9135f", 00:10:22.817 "assigned_rate_limits": { 00:10:22.817 "rw_ios_per_sec": 0, 00:10:22.817 "rw_mbytes_per_sec": 0, 00:10:22.817 "r_mbytes_per_sec": 0, 00:10:22.817 "w_mbytes_per_sec": 0 00:10:22.817 }, 00:10:22.817 "claimed": false, 00:10:22.817 "zoned": false, 00:10:22.817 "supported_io_types": { 00:10:22.817 "read": true, 00:10:22.817 "write": true, 00:10:22.817 "unmap": true, 00:10:22.817 "flush": true, 00:10:22.817 "reset": true, 00:10:22.817 "nvme_admin": false, 00:10:22.817 "nvme_io": false, 00:10:22.817 "nvme_io_md": false, 00:10:22.817 "write_zeroes": true, 00:10:22.817 "zcopy": false, 00:10:22.817 "get_zone_info": false, 00:10:22.817 "zone_management": false, 00:10:22.817 "zone_append": false, 00:10:22.817 "compare": false, 00:10:22.817 "compare_and_write": false, 00:10:22.817 "abort": false, 00:10:22.817 "seek_hole": false, 00:10:22.817 "seek_data": false, 00:10:22.817 "copy": false, 00:10:22.817 "nvme_iov_md": false 00:10:22.817 }, 00:10:22.817 "memory_domains": [ 
00:10:22.817 { 00:10:22.817 "dma_device_id": "system", 00:10:22.817 "dma_device_type": 1 00:10:22.817 }, 00:10:22.817 { 00:10:22.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.817 "dma_device_type": 2 00:10:22.817 }, 00:10:22.817 { 00:10:22.817 "dma_device_id": "system", 00:10:22.817 "dma_device_type": 1 00:10:22.817 }, 00:10:22.817 { 00:10:22.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.817 "dma_device_type": 2 00:10:22.817 }, 00:10:22.817 { 00:10:22.817 "dma_device_id": "system", 00:10:22.817 "dma_device_type": 1 00:10:22.817 }, 00:10:22.817 { 00:10:22.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.817 "dma_device_type": 2 00:10:22.817 }, 00:10:22.817 { 00:10:22.817 "dma_device_id": "system", 00:10:22.817 "dma_device_type": 1 00:10:22.817 }, 00:10:22.817 { 00:10:22.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.817 "dma_device_type": 2 00:10:22.817 } 00:10:22.817 ], 00:10:22.817 "driver_specific": { 00:10:22.817 "raid": { 00:10:22.817 "uuid": "c986fa86-63e6-4c4a-9783-0e45ecb9135f", 00:10:22.817 "strip_size_kb": 64, 00:10:22.817 "state": "online", 00:10:22.817 "raid_level": "raid0", 00:10:22.817 "superblock": false, 00:10:22.817 "num_base_bdevs": 4, 00:10:22.817 "num_base_bdevs_discovered": 4, 00:10:22.817 "num_base_bdevs_operational": 4, 00:10:22.817 "base_bdevs_list": [ 00:10:22.817 { 00:10:22.817 "name": "NewBaseBdev", 00:10:22.817 "uuid": "a2b35425-a844-4524-850b-7ed3b3db93ad", 00:10:22.817 "is_configured": true, 00:10:22.817 "data_offset": 0, 00:10:22.817 "data_size": 65536 00:10:22.817 }, 00:10:22.817 { 00:10:22.817 "name": "BaseBdev2", 00:10:22.818 "uuid": "5aeb3abc-e8d8-47bf-b218-d66dc7c0f5ff", 00:10:22.818 "is_configured": true, 00:10:22.818 "data_offset": 0, 00:10:22.818 "data_size": 65536 00:10:22.818 }, 00:10:22.818 { 00:10:22.818 "name": "BaseBdev3", 00:10:22.818 "uuid": "bb3718a3-e4ca-4485-ac64-e891e9658ab8", 00:10:22.818 "is_configured": true, 00:10:22.818 "data_offset": 0, 00:10:22.818 "data_size": 65536 
00:10:22.818 }, 00:10:22.818 { 00:10:22.818 "name": "BaseBdev4", 00:10:22.818 "uuid": "021e5667-a9b8-43b4-97c1-4c3cad7d3697", 00:10:22.818 "is_configured": true, 00:10:22.818 "data_offset": 0, 00:10:22.818 "data_size": 65536 00:10:22.818 } 00:10:22.818 ] 00:10:22.818 } 00:10:22.818 } 00:10:22.818 }' 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:22.818 BaseBdev2 00:10:22.818 BaseBdev3 00:10:22.818 BaseBdev4' 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.818 
21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.818 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.078 [2024-09-29 21:41:41.843181] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.078 [2024-09-29 21:41:41.843258] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.078 [2024-09-29 21:41:41.843353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.078 [2024-09-29 21:41:41.843443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.078 [2024-09-29 21:41:41.843486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69458 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 69458 ']' 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69458 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69458 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69458' 00:10:23.078 killing process with pid 69458 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69458 00:10:23.078 [2024-09-29 21:41:41.893899] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.078 21:41:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69458 00:10:23.343 [2024-09-29 21:41:42.312063] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.747 21:41:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:24.747 00:10:24.747 real 0m11.759s 00:10:24.747 user 0m18.254s 00:10:24.747 sys 0m2.219s 00:10:24.747 21:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.747 ************************************ 00:10:24.747 END TEST raid_state_function_test 00:10:24.747 ************************************ 00:10:24.747 21:41:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.747 21:41:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:24.747 21:41:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:24.747 21:41:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.747 21:41:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.747 ************************************ 00:10:24.747 START TEST raid_state_function_test_sb 00:10:24.747 ************************************ 00:10:24.747 21:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:10:24.747 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:24.747 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:24.747 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:24.747 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:25.007 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:25.007 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.007 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:25.007 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.007 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.007 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:25.007 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.007 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.007 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:25.007 
21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.007 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.007 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:25.007 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70129 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70129' 00:10:25.008 Process raid pid: 70129 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70129 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 70129 ']' 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:25.008 21:41:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.008 [2024-09-29 21:41:43.826799] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:25.008 [2024-09-29 21:41:43.826981] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.267 [2024-09-29 21:41:43.991601] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.267 [2024-09-29 21:41:44.237195] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.527 [2024-09-29 21:41:44.472437] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.527 [2024-09-29 21:41:44.472567] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.786 [2024-09-29 21:41:44.652758] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.786 [2024-09-29 21:41:44.652822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.786 [2024-09-29 21:41:44.652833] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.786 [2024-09-29 21:41:44.652843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.786 [2024-09-29 21:41:44.652849] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:25.786 [2024-09-29 21:41:44.652859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.786 [2024-09-29 21:41:44.652865] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.786 [2024-09-29 21:41:44.652873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.786 21:41:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.786 "name": "Existed_Raid", 00:10:25.786 "uuid": "3da74d02-42e5-4e8a-9550-8fd7a096a5e9", 00:10:25.786 "strip_size_kb": 64, 00:10:25.786 "state": "configuring", 00:10:25.786 "raid_level": "raid0", 00:10:25.786 "superblock": true, 00:10:25.786 "num_base_bdevs": 4, 00:10:25.786 "num_base_bdevs_discovered": 0, 00:10:25.786 "num_base_bdevs_operational": 4, 00:10:25.786 "base_bdevs_list": [ 00:10:25.786 { 00:10:25.786 "name": "BaseBdev1", 00:10:25.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.786 "is_configured": false, 00:10:25.786 "data_offset": 0, 00:10:25.786 "data_size": 0 00:10:25.786 }, 00:10:25.786 { 00:10:25.786 "name": "BaseBdev2", 00:10:25.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.786 "is_configured": false, 00:10:25.786 "data_offset": 0, 00:10:25.786 "data_size": 0 00:10:25.786 }, 00:10:25.786 { 00:10:25.786 "name": "BaseBdev3", 00:10:25.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.786 "is_configured": false, 00:10:25.786 "data_offset": 0, 00:10:25.786 "data_size": 0 00:10:25.786 }, 00:10:25.786 { 00:10:25.786 "name": "BaseBdev4", 00:10:25.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.786 "is_configured": false, 00:10:25.786 "data_offset": 0, 00:10:25.786 "data_size": 0 00:10:25.786 } 00:10:25.786 ] 00:10:25.786 }' 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.786 21:41:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.356 [2024-09-29 21:41:45.056002] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.356 [2024-09-29 21:41:45.056130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.356 [2024-09-29 21:41:45.067999] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.356 [2024-09-29 21:41:45.068108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.356 [2024-09-29 21:41:45.068139] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.356 [2024-09-29 21:41:45.068169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.356 [2024-09-29 21:41:45.068187] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.356 [2024-09-29 21:41:45.068208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.356 [2024-09-29 21:41:45.068230] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:26.356 [2024-09-29 21:41:45.068251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.356 [2024-09-29 21:41:45.155483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.356 BaseBdev1 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.356 [ 00:10:26.356 { 00:10:26.356 "name": "BaseBdev1", 00:10:26.356 "aliases": [ 00:10:26.356 "cf374be8-29d0-4aa3-8b12-b8cc08663f44" 00:10:26.356 ], 00:10:26.356 "product_name": "Malloc disk", 00:10:26.356 "block_size": 512, 00:10:26.356 "num_blocks": 65536, 00:10:26.356 "uuid": "cf374be8-29d0-4aa3-8b12-b8cc08663f44", 00:10:26.356 "assigned_rate_limits": { 00:10:26.356 "rw_ios_per_sec": 0, 00:10:26.356 "rw_mbytes_per_sec": 0, 00:10:26.356 "r_mbytes_per_sec": 0, 00:10:26.356 "w_mbytes_per_sec": 0 00:10:26.356 }, 00:10:26.356 "claimed": true, 00:10:26.356 "claim_type": "exclusive_write", 00:10:26.356 "zoned": false, 00:10:26.356 "supported_io_types": { 00:10:26.356 "read": true, 00:10:26.356 "write": true, 00:10:26.356 "unmap": true, 00:10:26.356 "flush": true, 00:10:26.356 "reset": true, 00:10:26.356 "nvme_admin": false, 00:10:26.356 "nvme_io": false, 00:10:26.356 "nvme_io_md": false, 00:10:26.356 "write_zeroes": true, 00:10:26.356 "zcopy": true, 00:10:26.356 "get_zone_info": false, 00:10:26.356 "zone_management": false, 00:10:26.356 "zone_append": false, 00:10:26.356 "compare": false, 00:10:26.356 "compare_and_write": false, 00:10:26.356 "abort": true, 00:10:26.356 "seek_hole": false, 00:10:26.356 "seek_data": false, 00:10:26.356 "copy": true, 00:10:26.356 "nvme_iov_md": false 00:10:26.356 }, 00:10:26.356 "memory_domains": [ 00:10:26.356 { 00:10:26.356 "dma_device_id": "system", 00:10:26.356 "dma_device_type": 1 00:10:26.356 }, 00:10:26.356 { 00:10:26.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.356 "dma_device_type": 2 00:10:26.356 } 00:10:26.356 ], 00:10:26.356 "driver_specific": {} 
00:10:26.356 } 00:10:26.356 ] 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.356 "name": "Existed_Raid", 00:10:26.356 "uuid": "2be7cf4c-71cb-4a38-b874-81acd4e0ed20", 00:10:26.356 "strip_size_kb": 64, 00:10:26.356 "state": "configuring", 00:10:26.356 "raid_level": "raid0", 00:10:26.356 "superblock": true, 00:10:26.356 "num_base_bdevs": 4, 00:10:26.356 "num_base_bdevs_discovered": 1, 00:10:26.356 "num_base_bdevs_operational": 4, 00:10:26.356 "base_bdevs_list": [ 00:10:26.356 { 00:10:26.356 "name": "BaseBdev1", 00:10:26.356 "uuid": "cf374be8-29d0-4aa3-8b12-b8cc08663f44", 00:10:26.356 "is_configured": true, 00:10:26.356 "data_offset": 2048, 00:10:26.356 "data_size": 63488 00:10:26.356 }, 00:10:26.356 { 00:10:26.356 "name": "BaseBdev2", 00:10:26.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.356 "is_configured": false, 00:10:26.356 "data_offset": 0, 00:10:26.356 "data_size": 0 00:10:26.356 }, 00:10:26.356 { 00:10:26.356 "name": "BaseBdev3", 00:10:26.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.356 "is_configured": false, 00:10:26.356 "data_offset": 0, 00:10:26.356 "data_size": 0 00:10:26.356 }, 00:10:26.356 { 00:10:26.356 "name": "BaseBdev4", 00:10:26.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.356 "is_configured": false, 00:10:26.356 "data_offset": 0, 00:10:26.356 "data_size": 0 00:10:26.356 } 00:10:26.356 ] 00:10:26.356 }' 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.356 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.616 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:26.616 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.616 21:41:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:26.875 [2024-09-29 21:41:45.602748] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.875 [2024-09-29 21:41:45.602794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:26.875 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.875 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.875 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.875 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.876 [2024-09-29 21:41:45.614807] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.876 [2024-09-29 21:41:45.616952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.876 [2024-09-29 21:41:45.616997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.876 [2024-09-29 21:41:45.617008] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.876 [2024-09-29 21:41:45.617019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.876 [2024-09-29 21:41:45.617026] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:26.876 [2024-09-29 21:41:45.617050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:26.876 21:41:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.876 "name": 
"Existed_Raid", 00:10:26.876 "uuid": "036a50e9-8d42-4593-81ac-733a93d067ef", 00:10:26.876 "strip_size_kb": 64, 00:10:26.876 "state": "configuring", 00:10:26.876 "raid_level": "raid0", 00:10:26.876 "superblock": true, 00:10:26.876 "num_base_bdevs": 4, 00:10:26.876 "num_base_bdevs_discovered": 1, 00:10:26.876 "num_base_bdevs_operational": 4, 00:10:26.876 "base_bdevs_list": [ 00:10:26.876 { 00:10:26.876 "name": "BaseBdev1", 00:10:26.876 "uuid": "cf374be8-29d0-4aa3-8b12-b8cc08663f44", 00:10:26.876 "is_configured": true, 00:10:26.876 "data_offset": 2048, 00:10:26.876 "data_size": 63488 00:10:26.876 }, 00:10:26.876 { 00:10:26.876 "name": "BaseBdev2", 00:10:26.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.876 "is_configured": false, 00:10:26.876 "data_offset": 0, 00:10:26.876 "data_size": 0 00:10:26.876 }, 00:10:26.876 { 00:10:26.876 "name": "BaseBdev3", 00:10:26.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.876 "is_configured": false, 00:10:26.876 "data_offset": 0, 00:10:26.876 "data_size": 0 00:10:26.876 }, 00:10:26.876 { 00:10:26.876 "name": "BaseBdev4", 00:10:26.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.876 "is_configured": false, 00:10:26.876 "data_offset": 0, 00:10:26.876 "data_size": 0 00:10:26.876 } 00:10:26.876 ] 00:10:26.876 }' 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.876 21:41:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.136 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.136 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.136 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.395 [2024-09-29 21:41:46.123022] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:27.395 BaseBdev2 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.395 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.395 [ 00:10:27.395 { 00:10:27.395 "name": "BaseBdev2", 00:10:27.395 "aliases": [ 00:10:27.395 "080b55d2-3773-41ff-b2b6-52b0d83de814" 00:10:27.395 ], 00:10:27.395 "product_name": "Malloc disk", 00:10:27.395 "block_size": 512, 00:10:27.395 "num_blocks": 65536, 00:10:27.395 "uuid": "080b55d2-3773-41ff-b2b6-52b0d83de814", 00:10:27.395 
"assigned_rate_limits": { 00:10:27.395 "rw_ios_per_sec": 0, 00:10:27.395 "rw_mbytes_per_sec": 0, 00:10:27.395 "r_mbytes_per_sec": 0, 00:10:27.395 "w_mbytes_per_sec": 0 00:10:27.395 }, 00:10:27.395 "claimed": true, 00:10:27.395 "claim_type": "exclusive_write", 00:10:27.395 "zoned": false, 00:10:27.395 "supported_io_types": { 00:10:27.395 "read": true, 00:10:27.395 "write": true, 00:10:27.396 "unmap": true, 00:10:27.396 "flush": true, 00:10:27.396 "reset": true, 00:10:27.396 "nvme_admin": false, 00:10:27.396 "nvme_io": false, 00:10:27.396 "nvme_io_md": false, 00:10:27.396 "write_zeroes": true, 00:10:27.396 "zcopy": true, 00:10:27.396 "get_zone_info": false, 00:10:27.396 "zone_management": false, 00:10:27.396 "zone_append": false, 00:10:27.396 "compare": false, 00:10:27.396 "compare_and_write": false, 00:10:27.396 "abort": true, 00:10:27.396 "seek_hole": false, 00:10:27.396 "seek_data": false, 00:10:27.396 "copy": true, 00:10:27.396 "nvme_iov_md": false 00:10:27.396 }, 00:10:27.396 "memory_domains": [ 00:10:27.396 { 00:10:27.396 "dma_device_id": "system", 00:10:27.396 "dma_device_type": 1 00:10:27.396 }, 00:10:27.396 { 00:10:27.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.396 "dma_device_type": 2 00:10:27.396 } 00:10:27.396 ], 00:10:27.396 "driver_specific": {} 00:10:27.396 } 00:10:27.396 ] 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.396 "name": "Existed_Raid", 00:10:27.396 "uuid": "036a50e9-8d42-4593-81ac-733a93d067ef", 00:10:27.396 "strip_size_kb": 64, 00:10:27.396 "state": "configuring", 00:10:27.396 "raid_level": "raid0", 00:10:27.396 "superblock": true, 00:10:27.396 "num_base_bdevs": 4, 00:10:27.396 "num_base_bdevs_discovered": 2, 00:10:27.396 "num_base_bdevs_operational": 4, 
00:10:27.396 "base_bdevs_list": [ 00:10:27.396 { 00:10:27.396 "name": "BaseBdev1", 00:10:27.396 "uuid": "cf374be8-29d0-4aa3-8b12-b8cc08663f44", 00:10:27.396 "is_configured": true, 00:10:27.396 "data_offset": 2048, 00:10:27.396 "data_size": 63488 00:10:27.396 }, 00:10:27.396 { 00:10:27.396 "name": "BaseBdev2", 00:10:27.396 "uuid": "080b55d2-3773-41ff-b2b6-52b0d83de814", 00:10:27.396 "is_configured": true, 00:10:27.396 "data_offset": 2048, 00:10:27.396 "data_size": 63488 00:10:27.396 }, 00:10:27.396 { 00:10:27.396 "name": "BaseBdev3", 00:10:27.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.396 "is_configured": false, 00:10:27.396 "data_offset": 0, 00:10:27.396 "data_size": 0 00:10:27.396 }, 00:10:27.396 { 00:10:27.396 "name": "BaseBdev4", 00:10:27.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.396 "is_configured": false, 00:10:27.396 "data_offset": 0, 00:10:27.396 "data_size": 0 00:10:27.396 } 00:10:27.396 ] 00:10:27.396 }' 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.396 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.656 [2024-09-29 21:41:46.605380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.656 BaseBdev3 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.656 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.656 [ 00:10:27.656 { 00:10:27.656 "name": "BaseBdev3", 00:10:27.656 "aliases": [ 00:10:27.656 "27f52c43-8cc2-46c2-8347-d635ebd20ab5" 00:10:27.656 ], 00:10:27.656 "product_name": "Malloc disk", 00:10:27.656 "block_size": 512, 00:10:27.656 "num_blocks": 65536, 00:10:27.656 "uuid": "27f52c43-8cc2-46c2-8347-d635ebd20ab5", 00:10:27.656 "assigned_rate_limits": { 00:10:27.656 "rw_ios_per_sec": 0, 00:10:27.656 "rw_mbytes_per_sec": 0, 00:10:27.656 "r_mbytes_per_sec": 0, 00:10:27.656 "w_mbytes_per_sec": 0 00:10:27.656 }, 00:10:27.656 "claimed": true, 00:10:27.656 "claim_type": "exclusive_write", 00:10:27.656 "zoned": false, 00:10:27.656 "supported_io_types": { 00:10:27.915 "read": true, 00:10:27.916 
"write": true, 00:10:27.916 "unmap": true, 00:10:27.916 "flush": true, 00:10:27.916 "reset": true, 00:10:27.916 "nvme_admin": false, 00:10:27.916 "nvme_io": false, 00:10:27.916 "nvme_io_md": false, 00:10:27.916 "write_zeroes": true, 00:10:27.916 "zcopy": true, 00:10:27.916 "get_zone_info": false, 00:10:27.916 "zone_management": false, 00:10:27.916 "zone_append": false, 00:10:27.916 "compare": false, 00:10:27.916 "compare_and_write": false, 00:10:27.916 "abort": true, 00:10:27.916 "seek_hole": false, 00:10:27.916 "seek_data": false, 00:10:27.916 "copy": true, 00:10:27.916 "nvme_iov_md": false 00:10:27.916 }, 00:10:27.916 "memory_domains": [ 00:10:27.916 { 00:10:27.916 "dma_device_id": "system", 00:10:27.916 "dma_device_type": 1 00:10:27.916 }, 00:10:27.916 { 00:10:27.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.916 "dma_device_type": 2 00:10:27.916 } 00:10:27.916 ], 00:10:27.916 "driver_specific": {} 00:10:27.916 } 00:10:27.916 ] 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.916 "name": "Existed_Raid", 00:10:27.916 "uuid": "036a50e9-8d42-4593-81ac-733a93d067ef", 00:10:27.916 "strip_size_kb": 64, 00:10:27.916 "state": "configuring", 00:10:27.916 "raid_level": "raid0", 00:10:27.916 "superblock": true, 00:10:27.916 "num_base_bdevs": 4, 00:10:27.916 "num_base_bdevs_discovered": 3, 00:10:27.916 "num_base_bdevs_operational": 4, 00:10:27.916 "base_bdevs_list": [ 00:10:27.916 { 00:10:27.916 "name": "BaseBdev1", 00:10:27.916 "uuid": "cf374be8-29d0-4aa3-8b12-b8cc08663f44", 00:10:27.916 "is_configured": true, 00:10:27.916 "data_offset": 2048, 00:10:27.916 "data_size": 63488 00:10:27.916 }, 00:10:27.916 { 00:10:27.916 "name": "BaseBdev2", 00:10:27.916 "uuid": 
"080b55d2-3773-41ff-b2b6-52b0d83de814", 00:10:27.916 "is_configured": true, 00:10:27.916 "data_offset": 2048, 00:10:27.916 "data_size": 63488 00:10:27.916 }, 00:10:27.916 { 00:10:27.916 "name": "BaseBdev3", 00:10:27.916 "uuid": "27f52c43-8cc2-46c2-8347-d635ebd20ab5", 00:10:27.916 "is_configured": true, 00:10:27.916 "data_offset": 2048, 00:10:27.916 "data_size": 63488 00:10:27.916 }, 00:10:27.916 { 00:10:27.916 "name": "BaseBdev4", 00:10:27.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.916 "is_configured": false, 00:10:27.916 "data_offset": 0, 00:10:27.916 "data_size": 0 00:10:27.916 } 00:10:27.916 ] 00:10:27.916 }' 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.916 21:41:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.175 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:28.175 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.175 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.435 [2024-09-29 21:41:47.160516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:28.435 [2024-09-29 21:41:47.160909] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:28.435 [2024-09-29 21:41:47.160968] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:28.435 [2024-09-29 21:41:47.161317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:28.435 BaseBdev4 00:10:28.435 [2024-09-29 21:41:47.161546] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:28.435 [2024-09-29 21:41:47.161563] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:28.435 [2024-09-29 21:41:47.161723] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.435 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.435 [ 00:10:28.435 { 00:10:28.435 "name": "BaseBdev4", 00:10:28.435 "aliases": [ 00:10:28.435 "bc509824-7a1c-41c8-9e0d-a83d9303dd53" 00:10:28.435 ], 00:10:28.435 "product_name": "Malloc disk", 00:10:28.435 "block_size": 512, 00:10:28.435 
"num_blocks": 65536, 00:10:28.435 "uuid": "bc509824-7a1c-41c8-9e0d-a83d9303dd53", 00:10:28.435 "assigned_rate_limits": { 00:10:28.435 "rw_ios_per_sec": 0, 00:10:28.435 "rw_mbytes_per_sec": 0, 00:10:28.435 "r_mbytes_per_sec": 0, 00:10:28.435 "w_mbytes_per_sec": 0 00:10:28.435 }, 00:10:28.435 "claimed": true, 00:10:28.435 "claim_type": "exclusive_write", 00:10:28.435 "zoned": false, 00:10:28.435 "supported_io_types": { 00:10:28.435 "read": true, 00:10:28.435 "write": true, 00:10:28.435 "unmap": true, 00:10:28.435 "flush": true, 00:10:28.435 "reset": true, 00:10:28.436 "nvme_admin": false, 00:10:28.436 "nvme_io": false, 00:10:28.436 "nvme_io_md": false, 00:10:28.436 "write_zeroes": true, 00:10:28.436 "zcopy": true, 00:10:28.436 "get_zone_info": false, 00:10:28.436 "zone_management": false, 00:10:28.436 "zone_append": false, 00:10:28.436 "compare": false, 00:10:28.436 "compare_and_write": false, 00:10:28.436 "abort": true, 00:10:28.436 "seek_hole": false, 00:10:28.436 "seek_data": false, 00:10:28.436 "copy": true, 00:10:28.436 "nvme_iov_md": false 00:10:28.436 }, 00:10:28.436 "memory_domains": [ 00:10:28.436 { 00:10:28.436 "dma_device_id": "system", 00:10:28.436 "dma_device_type": 1 00:10:28.436 }, 00:10:28.436 { 00:10:28.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.436 "dma_device_type": 2 00:10:28.436 } 00:10:28.436 ], 00:10:28.436 "driver_specific": {} 00:10:28.436 } 00:10:28.436 ] 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.436 "name": "Existed_Raid", 00:10:28.436 "uuid": "036a50e9-8d42-4593-81ac-733a93d067ef", 00:10:28.436 "strip_size_kb": 64, 00:10:28.436 "state": "online", 00:10:28.436 "raid_level": "raid0", 00:10:28.436 "superblock": true, 00:10:28.436 "num_base_bdevs": 4, 
00:10:28.436 "num_base_bdevs_discovered": 4, 00:10:28.436 "num_base_bdevs_operational": 4, 00:10:28.436 "base_bdevs_list": [ 00:10:28.436 { 00:10:28.436 "name": "BaseBdev1", 00:10:28.436 "uuid": "cf374be8-29d0-4aa3-8b12-b8cc08663f44", 00:10:28.436 "is_configured": true, 00:10:28.436 "data_offset": 2048, 00:10:28.436 "data_size": 63488 00:10:28.436 }, 00:10:28.436 { 00:10:28.436 "name": "BaseBdev2", 00:10:28.436 "uuid": "080b55d2-3773-41ff-b2b6-52b0d83de814", 00:10:28.436 "is_configured": true, 00:10:28.436 "data_offset": 2048, 00:10:28.436 "data_size": 63488 00:10:28.436 }, 00:10:28.436 { 00:10:28.436 "name": "BaseBdev3", 00:10:28.436 "uuid": "27f52c43-8cc2-46c2-8347-d635ebd20ab5", 00:10:28.436 "is_configured": true, 00:10:28.436 "data_offset": 2048, 00:10:28.436 "data_size": 63488 00:10:28.436 }, 00:10:28.436 { 00:10:28.436 "name": "BaseBdev4", 00:10:28.436 "uuid": "bc509824-7a1c-41c8-9e0d-a83d9303dd53", 00:10:28.436 "is_configured": true, 00:10:28.436 "data_offset": 2048, 00:10:28.436 "data_size": 63488 00:10:28.436 } 00:10:28.436 ] 00:10:28.436 }' 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.436 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:28.695 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:28.695 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:28.696 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.696 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.696 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.696 
21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.696 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:28.696 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.696 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.696 [2024-09-29 21:41:47.584182] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.696 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.696 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:28.696 "name": "Existed_Raid", 00:10:28.696 "aliases": [ 00:10:28.696 "036a50e9-8d42-4593-81ac-733a93d067ef" 00:10:28.696 ], 00:10:28.696 "product_name": "Raid Volume", 00:10:28.696 "block_size": 512, 00:10:28.696 "num_blocks": 253952, 00:10:28.696 "uuid": "036a50e9-8d42-4593-81ac-733a93d067ef", 00:10:28.696 "assigned_rate_limits": { 00:10:28.696 "rw_ios_per_sec": 0, 00:10:28.696 "rw_mbytes_per_sec": 0, 00:10:28.696 "r_mbytes_per_sec": 0, 00:10:28.696 "w_mbytes_per_sec": 0 00:10:28.696 }, 00:10:28.696 "claimed": false, 00:10:28.696 "zoned": false, 00:10:28.696 "supported_io_types": { 00:10:28.696 "read": true, 00:10:28.696 "write": true, 00:10:28.696 "unmap": true, 00:10:28.696 "flush": true, 00:10:28.696 "reset": true, 00:10:28.696 "nvme_admin": false, 00:10:28.696 "nvme_io": false, 00:10:28.696 "nvme_io_md": false, 00:10:28.696 "write_zeroes": true, 00:10:28.696 "zcopy": false, 00:10:28.696 "get_zone_info": false, 00:10:28.696 "zone_management": false, 00:10:28.696 "zone_append": false, 00:10:28.696 "compare": false, 00:10:28.696 "compare_and_write": false, 00:10:28.696 "abort": false, 00:10:28.696 "seek_hole": false, 00:10:28.696 "seek_data": false, 00:10:28.696 "copy": false, 00:10:28.696 
"nvme_iov_md": false 00:10:28.696 }, 00:10:28.696 "memory_domains": [ 00:10:28.696 { 00:10:28.696 "dma_device_id": "system", 00:10:28.696 "dma_device_type": 1 00:10:28.696 }, 00:10:28.696 { 00:10:28.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.696 "dma_device_type": 2 00:10:28.696 }, 00:10:28.696 { 00:10:28.696 "dma_device_id": "system", 00:10:28.696 "dma_device_type": 1 00:10:28.696 }, 00:10:28.696 { 00:10:28.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.696 "dma_device_type": 2 00:10:28.696 }, 00:10:28.696 { 00:10:28.696 "dma_device_id": "system", 00:10:28.696 "dma_device_type": 1 00:10:28.696 }, 00:10:28.696 { 00:10:28.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.696 "dma_device_type": 2 00:10:28.696 }, 00:10:28.696 { 00:10:28.696 "dma_device_id": "system", 00:10:28.696 "dma_device_type": 1 00:10:28.696 }, 00:10:28.696 { 00:10:28.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.696 "dma_device_type": 2 00:10:28.696 } 00:10:28.696 ], 00:10:28.696 "driver_specific": { 00:10:28.696 "raid": { 00:10:28.696 "uuid": "036a50e9-8d42-4593-81ac-733a93d067ef", 00:10:28.696 "strip_size_kb": 64, 00:10:28.696 "state": "online", 00:10:28.696 "raid_level": "raid0", 00:10:28.696 "superblock": true, 00:10:28.696 "num_base_bdevs": 4, 00:10:28.696 "num_base_bdevs_discovered": 4, 00:10:28.696 "num_base_bdevs_operational": 4, 00:10:28.696 "base_bdevs_list": [ 00:10:28.696 { 00:10:28.696 "name": "BaseBdev1", 00:10:28.696 "uuid": "cf374be8-29d0-4aa3-8b12-b8cc08663f44", 00:10:28.696 "is_configured": true, 00:10:28.696 "data_offset": 2048, 00:10:28.696 "data_size": 63488 00:10:28.696 }, 00:10:28.696 { 00:10:28.696 "name": "BaseBdev2", 00:10:28.696 "uuid": "080b55d2-3773-41ff-b2b6-52b0d83de814", 00:10:28.696 "is_configured": true, 00:10:28.696 "data_offset": 2048, 00:10:28.696 "data_size": 63488 00:10:28.696 }, 00:10:28.696 { 00:10:28.696 "name": "BaseBdev3", 00:10:28.696 "uuid": "27f52c43-8cc2-46c2-8347-d635ebd20ab5", 00:10:28.696 "is_configured": true, 
00:10:28.696 "data_offset": 2048, 00:10:28.696 "data_size": 63488 00:10:28.696 }, 00:10:28.696 { 00:10:28.696 "name": "BaseBdev4", 00:10:28.696 "uuid": "bc509824-7a1c-41c8-9e0d-a83d9303dd53", 00:10:28.696 "is_configured": true, 00:10:28.696 "data_offset": 2048, 00:10:28.696 "data_size": 63488 00:10:28.696 } 00:10:28.696 ] 00:10:28.696 } 00:10:28.696 } 00:10:28.696 }' 00:10:28.696 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:28.696 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:28.696 BaseBdev2 00:10:28.696 BaseBdev3 00:10:28.696 BaseBdev4' 00:10:28.696 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.956 21:41:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.956 21:41:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.956 [2024-09-29 21:41:47.915312] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.956 [2024-09-29 21:41:47.915390] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:28.956 [2024-09-29 21:41:47.915446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.216 "name": "Existed_Raid", 00:10:29.216 "uuid": "036a50e9-8d42-4593-81ac-733a93d067ef", 00:10:29.216 "strip_size_kb": 64, 00:10:29.216 "state": "offline", 00:10:29.216 "raid_level": "raid0", 00:10:29.216 "superblock": true, 00:10:29.216 "num_base_bdevs": 4, 00:10:29.216 "num_base_bdevs_discovered": 3, 00:10:29.216 "num_base_bdevs_operational": 3, 00:10:29.216 "base_bdevs_list": [ 00:10:29.216 { 00:10:29.216 "name": null, 00:10:29.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.216 "is_configured": false, 00:10:29.216 "data_offset": 0, 00:10:29.216 "data_size": 63488 00:10:29.216 }, 00:10:29.216 { 00:10:29.216 "name": "BaseBdev2", 00:10:29.216 "uuid": "080b55d2-3773-41ff-b2b6-52b0d83de814", 00:10:29.216 "is_configured": true, 00:10:29.216 "data_offset": 2048, 00:10:29.216 "data_size": 63488 00:10:29.216 }, 00:10:29.216 { 00:10:29.216 "name": "BaseBdev3", 00:10:29.216 "uuid": "27f52c43-8cc2-46c2-8347-d635ebd20ab5", 00:10:29.216 "is_configured": true, 00:10:29.216 "data_offset": 2048, 00:10:29.216 "data_size": 63488 00:10:29.216 }, 00:10:29.216 { 00:10:29.216 "name": "BaseBdev4", 00:10:29.216 "uuid": "bc509824-7a1c-41c8-9e0d-a83d9303dd53", 00:10:29.216 "is_configured": true, 00:10:29.216 "data_offset": 2048, 00:10:29.216 "data_size": 63488 00:10:29.216 } 00:10:29.216 ] 00:10:29.216 }' 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.216 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.786 21:41:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.786 [2024-09-29 21:41:48.519474] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.786 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:29.787 [2024-09-29 21:41:48.676874] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:30.046 21:41:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.046 [2024-09-29 21:41:48.831623] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:30.046 [2024-09-29 21:41:48.831676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.046 21:41:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.046 BaseBdev2 00:10:30.046 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.046 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:30.046 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:30.046 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.046 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:30.046 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.046 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.047 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.047 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.047 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.047 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.047 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:30.047 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.047 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.306 [ 00:10:30.306 { 00:10:30.306 "name": "BaseBdev2", 00:10:30.306 "aliases": [ 00:10:30.306 
"53b47e41-ecea-4ff4-a5c7-01c3e2bd1f18" 00:10:30.306 ], 00:10:30.306 "product_name": "Malloc disk", 00:10:30.306 "block_size": 512, 00:10:30.306 "num_blocks": 65536, 00:10:30.306 "uuid": "53b47e41-ecea-4ff4-a5c7-01c3e2bd1f18", 00:10:30.306 "assigned_rate_limits": { 00:10:30.306 "rw_ios_per_sec": 0, 00:10:30.306 "rw_mbytes_per_sec": 0, 00:10:30.306 "r_mbytes_per_sec": 0, 00:10:30.306 "w_mbytes_per_sec": 0 00:10:30.306 }, 00:10:30.306 "claimed": false, 00:10:30.306 "zoned": false, 00:10:30.306 "supported_io_types": { 00:10:30.306 "read": true, 00:10:30.306 "write": true, 00:10:30.306 "unmap": true, 00:10:30.306 "flush": true, 00:10:30.306 "reset": true, 00:10:30.306 "nvme_admin": false, 00:10:30.306 "nvme_io": false, 00:10:30.306 "nvme_io_md": false, 00:10:30.306 "write_zeroes": true, 00:10:30.306 "zcopy": true, 00:10:30.306 "get_zone_info": false, 00:10:30.306 "zone_management": false, 00:10:30.306 "zone_append": false, 00:10:30.306 "compare": false, 00:10:30.306 "compare_and_write": false, 00:10:30.306 "abort": true, 00:10:30.306 "seek_hole": false, 00:10:30.306 "seek_data": false, 00:10:30.306 "copy": true, 00:10:30.306 "nvme_iov_md": false 00:10:30.306 }, 00:10:30.306 "memory_domains": [ 00:10:30.306 { 00:10:30.306 "dma_device_id": "system", 00:10:30.306 "dma_device_type": 1 00:10:30.306 }, 00:10:30.306 { 00:10:30.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.306 "dma_device_type": 2 00:10:30.306 } 00:10:30.306 ], 00:10:30.306 "driver_specific": {} 00:10:30.306 } 00:10:30.306 ] 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.306 21:41:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.306 BaseBdev3 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.306 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.306 [ 00:10:30.306 { 
00:10:30.306 "name": "BaseBdev3", 00:10:30.306 "aliases": [ 00:10:30.306 "7816a0c2-42e7-47a5-8464-ee627dad0fec" 00:10:30.306 ], 00:10:30.306 "product_name": "Malloc disk", 00:10:30.306 "block_size": 512, 00:10:30.306 "num_blocks": 65536, 00:10:30.306 "uuid": "7816a0c2-42e7-47a5-8464-ee627dad0fec", 00:10:30.306 "assigned_rate_limits": { 00:10:30.306 "rw_ios_per_sec": 0, 00:10:30.306 "rw_mbytes_per_sec": 0, 00:10:30.306 "r_mbytes_per_sec": 0, 00:10:30.306 "w_mbytes_per_sec": 0 00:10:30.306 }, 00:10:30.306 "claimed": false, 00:10:30.306 "zoned": false, 00:10:30.306 "supported_io_types": { 00:10:30.306 "read": true, 00:10:30.306 "write": true, 00:10:30.306 "unmap": true, 00:10:30.306 "flush": true, 00:10:30.306 "reset": true, 00:10:30.306 "nvme_admin": false, 00:10:30.306 "nvme_io": false, 00:10:30.306 "nvme_io_md": false, 00:10:30.306 "write_zeroes": true, 00:10:30.306 "zcopy": true, 00:10:30.306 "get_zone_info": false, 00:10:30.306 "zone_management": false, 00:10:30.306 "zone_append": false, 00:10:30.306 "compare": false, 00:10:30.306 "compare_and_write": false, 00:10:30.306 "abort": true, 00:10:30.306 "seek_hole": false, 00:10:30.306 "seek_data": false, 00:10:30.306 "copy": true, 00:10:30.306 "nvme_iov_md": false 00:10:30.306 }, 00:10:30.306 "memory_domains": [ 00:10:30.306 { 00:10:30.306 "dma_device_id": "system", 00:10:30.306 "dma_device_type": 1 00:10:30.306 }, 00:10:30.306 { 00:10:30.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.306 "dma_device_type": 2 00:10:30.306 } 00:10:30.306 ], 00:10:30.306 "driver_specific": {} 00:10:30.307 } 00:10:30.307 ] 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.307 BaseBdev4 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:30.307 [ 00:10:30.307 { 00:10:30.307 "name": "BaseBdev4", 00:10:30.307 "aliases": [ 00:10:30.307 "99ca52f7-998a-4b35-8221-75e51d2b9dd2" 00:10:30.307 ], 00:10:30.307 "product_name": "Malloc disk", 00:10:30.307 "block_size": 512, 00:10:30.307 "num_blocks": 65536, 00:10:30.307 "uuid": "99ca52f7-998a-4b35-8221-75e51d2b9dd2", 00:10:30.307 "assigned_rate_limits": { 00:10:30.307 "rw_ios_per_sec": 0, 00:10:30.307 "rw_mbytes_per_sec": 0, 00:10:30.307 "r_mbytes_per_sec": 0, 00:10:30.307 "w_mbytes_per_sec": 0 00:10:30.307 }, 00:10:30.307 "claimed": false, 00:10:30.307 "zoned": false, 00:10:30.307 "supported_io_types": { 00:10:30.307 "read": true, 00:10:30.307 "write": true, 00:10:30.307 "unmap": true, 00:10:30.307 "flush": true, 00:10:30.307 "reset": true, 00:10:30.307 "nvme_admin": false, 00:10:30.307 "nvme_io": false, 00:10:30.307 "nvme_io_md": false, 00:10:30.307 "write_zeroes": true, 00:10:30.307 "zcopy": true, 00:10:30.307 "get_zone_info": false, 00:10:30.307 "zone_management": false, 00:10:30.307 "zone_append": false, 00:10:30.307 "compare": false, 00:10:30.307 "compare_and_write": false, 00:10:30.307 "abort": true, 00:10:30.307 "seek_hole": false, 00:10:30.307 "seek_data": false, 00:10:30.307 "copy": true, 00:10:30.307 "nvme_iov_md": false 00:10:30.307 }, 00:10:30.307 "memory_domains": [ 00:10:30.307 { 00:10:30.307 "dma_device_id": "system", 00:10:30.307 "dma_device_type": 1 00:10:30.307 }, 00:10:30.307 { 00:10:30.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.307 "dma_device_type": 2 00:10:30.307 } 00:10:30.307 ], 00:10:30.307 "driver_specific": {} 00:10:30.307 } 00:10:30.307 ] 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.307 21:41:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.307 [2024-09-29 21:41:49.224487] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.307 [2024-09-29 21:41:49.224630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.307 [2024-09-29 21:41:49.224672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.307 [2024-09-29 21:41:49.226833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.307 [2024-09-29 21:41:49.226929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.307 "name": "Existed_Raid", 00:10:30.307 "uuid": "e9df0d61-6523-4d88-9ae4-e3505149aea7", 00:10:30.307 "strip_size_kb": 64, 00:10:30.307 "state": "configuring", 00:10:30.307 "raid_level": "raid0", 00:10:30.307 "superblock": true, 00:10:30.307 "num_base_bdevs": 4, 00:10:30.307 "num_base_bdevs_discovered": 3, 00:10:30.307 "num_base_bdevs_operational": 4, 00:10:30.307 "base_bdevs_list": [ 00:10:30.307 { 00:10:30.307 "name": "BaseBdev1", 00:10:30.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.307 "is_configured": false, 00:10:30.307 "data_offset": 0, 00:10:30.307 "data_size": 0 00:10:30.307 }, 00:10:30.307 { 00:10:30.307 "name": "BaseBdev2", 00:10:30.307 "uuid": "53b47e41-ecea-4ff4-a5c7-01c3e2bd1f18", 00:10:30.307 "is_configured": true, 00:10:30.307 "data_offset": 2048, 00:10:30.307 "data_size": 63488 
00:10:30.307 }, 00:10:30.307 { 00:10:30.307 "name": "BaseBdev3", 00:10:30.307 "uuid": "7816a0c2-42e7-47a5-8464-ee627dad0fec", 00:10:30.307 "is_configured": true, 00:10:30.307 "data_offset": 2048, 00:10:30.307 "data_size": 63488 00:10:30.307 }, 00:10:30.307 { 00:10:30.307 "name": "BaseBdev4", 00:10:30.307 "uuid": "99ca52f7-998a-4b35-8221-75e51d2b9dd2", 00:10:30.307 "is_configured": true, 00:10:30.307 "data_offset": 2048, 00:10:30.307 "data_size": 63488 00:10:30.307 } 00:10:30.307 ] 00:10:30.307 }' 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.307 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.875 [2024-09-29 21:41:49.663713] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.875 "name": "Existed_Raid", 00:10:30.875 "uuid": "e9df0d61-6523-4d88-9ae4-e3505149aea7", 00:10:30.875 "strip_size_kb": 64, 00:10:30.875 "state": "configuring", 00:10:30.875 "raid_level": "raid0", 00:10:30.875 "superblock": true, 00:10:30.875 "num_base_bdevs": 4, 00:10:30.875 "num_base_bdevs_discovered": 2, 00:10:30.875 "num_base_bdevs_operational": 4, 00:10:30.875 "base_bdevs_list": [ 00:10:30.875 { 00:10:30.875 "name": "BaseBdev1", 00:10:30.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.875 "is_configured": false, 00:10:30.875 "data_offset": 0, 00:10:30.875 "data_size": 0 00:10:30.875 }, 00:10:30.875 { 00:10:30.875 "name": null, 00:10:30.875 "uuid": "53b47e41-ecea-4ff4-a5c7-01c3e2bd1f18", 00:10:30.875 "is_configured": false, 00:10:30.875 "data_offset": 0, 00:10:30.875 "data_size": 63488 
00:10:30.875 }, 00:10:30.875 { 00:10:30.875 "name": "BaseBdev3", 00:10:30.875 "uuid": "7816a0c2-42e7-47a5-8464-ee627dad0fec", 00:10:30.875 "is_configured": true, 00:10:30.875 "data_offset": 2048, 00:10:30.875 "data_size": 63488 00:10:30.875 }, 00:10:30.875 { 00:10:30.875 "name": "BaseBdev4", 00:10:30.875 "uuid": "99ca52f7-998a-4b35-8221-75e51d2b9dd2", 00:10:30.875 "is_configured": true, 00:10:30.875 "data_offset": 2048, 00:10:30.875 "data_size": 63488 00:10:30.875 } 00:10:30.875 ] 00:10:30.875 }' 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.875 21:41:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.135 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:31.135 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.135 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.135 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.395 [2024-09-29 21:41:50.177141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.395 BaseBdev1 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.395 [ 00:10:31.395 { 00:10:31.395 "name": "BaseBdev1", 00:10:31.395 "aliases": [ 00:10:31.395 "2e6d4ac6-9ccf-4547-9263-6d92b52d603b" 00:10:31.395 ], 00:10:31.395 "product_name": "Malloc disk", 00:10:31.395 "block_size": 512, 00:10:31.395 "num_blocks": 65536, 00:10:31.395 "uuid": "2e6d4ac6-9ccf-4547-9263-6d92b52d603b", 00:10:31.395 "assigned_rate_limits": { 00:10:31.395 "rw_ios_per_sec": 0, 00:10:31.395 "rw_mbytes_per_sec": 0, 
00:10:31.395 "r_mbytes_per_sec": 0, 00:10:31.395 "w_mbytes_per_sec": 0 00:10:31.395 }, 00:10:31.395 "claimed": true, 00:10:31.395 "claim_type": "exclusive_write", 00:10:31.395 "zoned": false, 00:10:31.395 "supported_io_types": { 00:10:31.395 "read": true, 00:10:31.395 "write": true, 00:10:31.395 "unmap": true, 00:10:31.395 "flush": true, 00:10:31.395 "reset": true, 00:10:31.395 "nvme_admin": false, 00:10:31.395 "nvme_io": false, 00:10:31.395 "nvme_io_md": false, 00:10:31.395 "write_zeroes": true, 00:10:31.395 "zcopy": true, 00:10:31.395 "get_zone_info": false, 00:10:31.395 "zone_management": false, 00:10:31.395 "zone_append": false, 00:10:31.395 "compare": false, 00:10:31.395 "compare_and_write": false, 00:10:31.395 "abort": true, 00:10:31.395 "seek_hole": false, 00:10:31.395 "seek_data": false, 00:10:31.395 "copy": true, 00:10:31.395 "nvme_iov_md": false 00:10:31.395 }, 00:10:31.395 "memory_domains": [ 00:10:31.395 { 00:10:31.395 "dma_device_id": "system", 00:10:31.395 "dma_device_type": 1 00:10:31.395 }, 00:10:31.395 { 00:10:31.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.395 "dma_device_type": 2 00:10:31.395 } 00:10:31.395 ], 00:10:31.395 "driver_specific": {} 00:10:31.395 } 00:10:31.395 ] 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.395 21:41:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.395 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.396 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.396 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.396 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.396 "name": "Existed_Raid", 00:10:31.396 "uuid": "e9df0d61-6523-4d88-9ae4-e3505149aea7", 00:10:31.396 "strip_size_kb": 64, 00:10:31.396 "state": "configuring", 00:10:31.396 "raid_level": "raid0", 00:10:31.396 "superblock": true, 00:10:31.396 "num_base_bdevs": 4, 00:10:31.396 "num_base_bdevs_discovered": 3, 00:10:31.396 "num_base_bdevs_operational": 4, 00:10:31.396 "base_bdevs_list": [ 00:10:31.396 { 00:10:31.396 "name": "BaseBdev1", 00:10:31.396 "uuid": "2e6d4ac6-9ccf-4547-9263-6d92b52d603b", 00:10:31.396 "is_configured": true, 00:10:31.396 "data_offset": 2048, 00:10:31.396 "data_size": 63488 00:10:31.396 }, 00:10:31.396 { 
00:10:31.396 "name": null, 00:10:31.396 "uuid": "53b47e41-ecea-4ff4-a5c7-01c3e2bd1f18", 00:10:31.396 "is_configured": false, 00:10:31.396 "data_offset": 0, 00:10:31.396 "data_size": 63488 00:10:31.396 }, 00:10:31.396 { 00:10:31.396 "name": "BaseBdev3", 00:10:31.396 "uuid": "7816a0c2-42e7-47a5-8464-ee627dad0fec", 00:10:31.396 "is_configured": true, 00:10:31.396 "data_offset": 2048, 00:10:31.396 "data_size": 63488 00:10:31.396 }, 00:10:31.396 { 00:10:31.396 "name": "BaseBdev4", 00:10:31.396 "uuid": "99ca52f7-998a-4b35-8221-75e51d2b9dd2", 00:10:31.396 "is_configured": true, 00:10:31.396 "data_offset": 2048, 00:10:31.396 "data_size": 63488 00:10:31.396 } 00:10:31.396 ] 00:10:31.396 }' 00:10:31.396 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.396 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.656 [2024-09-29 21:41:50.616384] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.656 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.916 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.916 21:41:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.916 "name": "Existed_Raid", 00:10:31.916 "uuid": "e9df0d61-6523-4d88-9ae4-e3505149aea7", 00:10:31.916 "strip_size_kb": 64, 00:10:31.916 "state": "configuring", 00:10:31.916 "raid_level": "raid0", 00:10:31.916 "superblock": true, 00:10:31.916 "num_base_bdevs": 4, 00:10:31.916 "num_base_bdevs_discovered": 2, 00:10:31.916 "num_base_bdevs_operational": 4, 00:10:31.916 "base_bdevs_list": [ 00:10:31.916 { 00:10:31.916 "name": "BaseBdev1", 00:10:31.916 "uuid": "2e6d4ac6-9ccf-4547-9263-6d92b52d603b", 00:10:31.916 "is_configured": true, 00:10:31.916 "data_offset": 2048, 00:10:31.916 "data_size": 63488 00:10:31.916 }, 00:10:31.916 { 00:10:31.916 "name": null, 00:10:31.916 "uuid": "53b47e41-ecea-4ff4-a5c7-01c3e2bd1f18", 00:10:31.916 "is_configured": false, 00:10:31.916 "data_offset": 0, 00:10:31.916 "data_size": 63488 00:10:31.916 }, 00:10:31.916 { 00:10:31.916 "name": null, 00:10:31.916 "uuid": "7816a0c2-42e7-47a5-8464-ee627dad0fec", 00:10:31.916 "is_configured": false, 00:10:31.916 "data_offset": 0, 00:10:31.916 "data_size": 63488 00:10:31.916 }, 00:10:31.916 { 00:10:31.916 "name": "BaseBdev4", 00:10:31.916 "uuid": "99ca52f7-998a-4b35-8221-75e51d2b9dd2", 00:10:31.916 "is_configured": true, 00:10:31.916 "data_offset": 2048, 00:10:31.916 "data_size": 63488 00:10:31.916 } 00:10:31.916 ] 00:10:31.916 }' 00:10:31.916 21:41:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.916 21:41:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.175 
21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.175 [2024-09-29 21:41:51.079607] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.175 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.176 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.176 "name": "Existed_Raid", 00:10:32.176 "uuid": "e9df0d61-6523-4d88-9ae4-e3505149aea7", 00:10:32.176 "strip_size_kb": 64, 00:10:32.176 "state": "configuring", 00:10:32.176 "raid_level": "raid0", 00:10:32.176 "superblock": true, 00:10:32.176 "num_base_bdevs": 4, 00:10:32.176 "num_base_bdevs_discovered": 3, 00:10:32.176 "num_base_bdevs_operational": 4, 00:10:32.176 "base_bdevs_list": [ 00:10:32.176 { 00:10:32.176 "name": "BaseBdev1", 00:10:32.176 "uuid": "2e6d4ac6-9ccf-4547-9263-6d92b52d603b", 00:10:32.176 "is_configured": true, 00:10:32.176 "data_offset": 2048, 00:10:32.176 "data_size": 63488 00:10:32.176 }, 00:10:32.176 { 00:10:32.176 "name": null, 00:10:32.176 "uuid": "53b47e41-ecea-4ff4-a5c7-01c3e2bd1f18", 00:10:32.176 "is_configured": false, 00:10:32.176 "data_offset": 0, 00:10:32.176 "data_size": 63488 00:10:32.176 }, 00:10:32.176 { 00:10:32.176 "name": "BaseBdev3", 00:10:32.176 "uuid": "7816a0c2-42e7-47a5-8464-ee627dad0fec", 00:10:32.176 "is_configured": true, 00:10:32.176 "data_offset": 2048, 00:10:32.176 "data_size": 63488 00:10:32.176 }, 00:10:32.176 { 00:10:32.176 "name": "BaseBdev4", 00:10:32.176 "uuid": 
"99ca52f7-998a-4b35-8221-75e51d2b9dd2", 00:10:32.176 "is_configured": true, 00:10:32.176 "data_offset": 2048, 00:10:32.176 "data_size": 63488 00:10:32.176 } 00:10:32.176 ] 00:10:32.176 }' 00:10:32.176 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.176 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.745 [2024-09-29 21:41:51.534848] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.745 "name": "Existed_Raid", 00:10:32.745 "uuid": "e9df0d61-6523-4d88-9ae4-e3505149aea7", 00:10:32.745 "strip_size_kb": 64, 00:10:32.745 "state": "configuring", 00:10:32.745 "raid_level": "raid0", 00:10:32.745 "superblock": true, 00:10:32.745 "num_base_bdevs": 4, 00:10:32.745 "num_base_bdevs_discovered": 2, 00:10:32.745 "num_base_bdevs_operational": 4, 00:10:32.745 "base_bdevs_list": [ 00:10:32.745 { 00:10:32.745 "name": null, 00:10:32.745 
"uuid": "2e6d4ac6-9ccf-4547-9263-6d92b52d603b", 00:10:32.745 "is_configured": false, 00:10:32.745 "data_offset": 0, 00:10:32.745 "data_size": 63488 00:10:32.745 }, 00:10:32.745 { 00:10:32.745 "name": null, 00:10:32.745 "uuid": "53b47e41-ecea-4ff4-a5c7-01c3e2bd1f18", 00:10:32.745 "is_configured": false, 00:10:32.745 "data_offset": 0, 00:10:32.745 "data_size": 63488 00:10:32.745 }, 00:10:32.745 { 00:10:32.745 "name": "BaseBdev3", 00:10:32.745 "uuid": "7816a0c2-42e7-47a5-8464-ee627dad0fec", 00:10:32.745 "is_configured": true, 00:10:32.745 "data_offset": 2048, 00:10:32.745 "data_size": 63488 00:10:32.745 }, 00:10:32.745 { 00:10:32.745 "name": "BaseBdev4", 00:10:32.745 "uuid": "99ca52f7-998a-4b35-8221-75e51d2b9dd2", 00:10:32.745 "is_configured": true, 00:10:32.745 "data_offset": 2048, 00:10:32.745 "data_size": 63488 00:10:32.745 } 00:10:32.745 ] 00:10:32.745 }' 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.745 21:41:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.313 [2024-09-29 21:41:52.102584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.313 21:41:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.313 "name": "Existed_Raid", 00:10:33.313 "uuid": "e9df0d61-6523-4d88-9ae4-e3505149aea7", 00:10:33.313 "strip_size_kb": 64, 00:10:33.313 "state": "configuring", 00:10:33.313 "raid_level": "raid0", 00:10:33.313 "superblock": true, 00:10:33.313 "num_base_bdevs": 4, 00:10:33.313 "num_base_bdevs_discovered": 3, 00:10:33.313 "num_base_bdevs_operational": 4, 00:10:33.313 "base_bdevs_list": [ 00:10:33.313 { 00:10:33.313 "name": null, 00:10:33.313 "uuid": "2e6d4ac6-9ccf-4547-9263-6d92b52d603b", 00:10:33.313 "is_configured": false, 00:10:33.313 "data_offset": 0, 00:10:33.313 "data_size": 63488 00:10:33.313 }, 00:10:33.313 { 00:10:33.313 "name": "BaseBdev2", 00:10:33.313 "uuid": "53b47e41-ecea-4ff4-a5c7-01c3e2bd1f18", 00:10:33.313 "is_configured": true, 00:10:33.313 "data_offset": 2048, 00:10:33.313 "data_size": 63488 00:10:33.313 }, 00:10:33.313 { 00:10:33.313 "name": "BaseBdev3", 00:10:33.313 "uuid": "7816a0c2-42e7-47a5-8464-ee627dad0fec", 00:10:33.313 "is_configured": true, 00:10:33.313 "data_offset": 2048, 00:10:33.313 "data_size": 63488 00:10:33.313 }, 00:10:33.313 { 00:10:33.313 "name": "BaseBdev4", 00:10:33.313 "uuid": "99ca52f7-998a-4b35-8221-75e51d2b9dd2", 00:10:33.313 "is_configured": true, 00:10:33.313 "data_offset": 2048, 00:10:33.313 "data_size": 63488 00:10:33.313 } 00:10:33.313 ] 00:10:33.313 }' 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.313 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.572 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.572 21:41:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.572 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.572 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2e6d4ac6-9ccf-4547-9263-6d92b52d603b 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.832 [2024-09-29 21:41:52.679320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:33.832 [2024-09-29 21:41:52.679683] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:33.832 [2024-09-29 21:41:52.679736] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:33.832 [2024-09-29 21:41:52.680051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:33.832 NewBaseBdev 00:10:33.832 [2024-09-29 21:41:52.680244] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:33.832 [2024-09-29 21:41:52.680259] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:33.832 [2024-09-29 21:41:52.680393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.832 21:41:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.832 [ 00:10:33.832 { 00:10:33.832 "name": "NewBaseBdev", 00:10:33.832 "aliases": [ 00:10:33.832 "2e6d4ac6-9ccf-4547-9263-6d92b52d603b" 00:10:33.832 ], 00:10:33.832 "product_name": "Malloc disk", 00:10:33.832 "block_size": 512, 00:10:33.832 "num_blocks": 65536, 00:10:33.832 "uuid": "2e6d4ac6-9ccf-4547-9263-6d92b52d603b", 00:10:33.832 "assigned_rate_limits": { 00:10:33.832 "rw_ios_per_sec": 0, 00:10:33.832 "rw_mbytes_per_sec": 0, 00:10:33.832 "r_mbytes_per_sec": 0, 00:10:33.832 "w_mbytes_per_sec": 0 00:10:33.832 }, 00:10:33.832 "claimed": true, 00:10:33.832 "claim_type": "exclusive_write", 00:10:33.832 "zoned": false, 00:10:33.832 "supported_io_types": { 00:10:33.832 "read": true, 00:10:33.832 "write": true, 00:10:33.832 "unmap": true, 00:10:33.832 "flush": true, 00:10:33.832 "reset": true, 00:10:33.832 "nvme_admin": false, 00:10:33.832 "nvme_io": false, 00:10:33.832 "nvme_io_md": false, 00:10:33.832 "write_zeroes": true, 00:10:33.832 "zcopy": true, 00:10:33.832 "get_zone_info": false, 00:10:33.832 "zone_management": false, 00:10:33.832 "zone_append": false, 00:10:33.832 "compare": false, 00:10:33.832 "compare_and_write": false, 00:10:33.832 "abort": true, 00:10:33.832 "seek_hole": false, 00:10:33.832 "seek_data": false, 00:10:33.832 "copy": true, 00:10:33.832 "nvme_iov_md": false 00:10:33.832 }, 00:10:33.832 "memory_domains": [ 00:10:33.832 { 00:10:33.832 "dma_device_id": "system", 00:10:33.832 "dma_device_type": 1 00:10:33.832 }, 00:10:33.832 { 00:10:33.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.832 "dma_device_type": 2 00:10:33.832 } 00:10:33.832 ], 00:10:33.832 "driver_specific": {} 00:10:33.832 } 00:10:33.832 ] 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.832 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:33.833 21:41:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.833 "name": "Existed_Raid", 00:10:33.833 "uuid": "e9df0d61-6523-4d88-9ae4-e3505149aea7", 00:10:33.833 "strip_size_kb": 64, 00:10:33.833 
"state": "online", 00:10:33.833 "raid_level": "raid0", 00:10:33.833 "superblock": true, 00:10:33.833 "num_base_bdevs": 4, 00:10:33.833 "num_base_bdevs_discovered": 4, 00:10:33.833 "num_base_bdevs_operational": 4, 00:10:33.833 "base_bdevs_list": [ 00:10:33.833 { 00:10:33.833 "name": "NewBaseBdev", 00:10:33.833 "uuid": "2e6d4ac6-9ccf-4547-9263-6d92b52d603b", 00:10:33.833 "is_configured": true, 00:10:33.833 "data_offset": 2048, 00:10:33.833 "data_size": 63488 00:10:33.833 }, 00:10:33.833 { 00:10:33.833 "name": "BaseBdev2", 00:10:33.833 "uuid": "53b47e41-ecea-4ff4-a5c7-01c3e2bd1f18", 00:10:33.833 "is_configured": true, 00:10:33.833 "data_offset": 2048, 00:10:33.833 "data_size": 63488 00:10:33.833 }, 00:10:33.833 { 00:10:33.833 "name": "BaseBdev3", 00:10:33.833 "uuid": "7816a0c2-42e7-47a5-8464-ee627dad0fec", 00:10:33.833 "is_configured": true, 00:10:33.833 "data_offset": 2048, 00:10:33.833 "data_size": 63488 00:10:33.833 }, 00:10:33.833 { 00:10:33.833 "name": "BaseBdev4", 00:10:33.833 "uuid": "99ca52f7-998a-4b35-8221-75e51d2b9dd2", 00:10:33.833 "is_configured": true, 00:10:33.833 "data_offset": 2048, 00:10:33.833 "data_size": 63488 00:10:33.833 } 00:10:33.833 ] 00:10:33.833 }' 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.833 21:41:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.402 
21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.402 [2024-09-29 21:41:53.126872] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.402 "name": "Existed_Raid", 00:10:34.402 "aliases": [ 00:10:34.402 "e9df0d61-6523-4d88-9ae4-e3505149aea7" 00:10:34.402 ], 00:10:34.402 "product_name": "Raid Volume", 00:10:34.402 "block_size": 512, 00:10:34.402 "num_blocks": 253952, 00:10:34.402 "uuid": "e9df0d61-6523-4d88-9ae4-e3505149aea7", 00:10:34.402 "assigned_rate_limits": { 00:10:34.402 "rw_ios_per_sec": 0, 00:10:34.402 "rw_mbytes_per_sec": 0, 00:10:34.402 "r_mbytes_per_sec": 0, 00:10:34.402 "w_mbytes_per_sec": 0 00:10:34.402 }, 00:10:34.402 "claimed": false, 00:10:34.402 "zoned": false, 00:10:34.402 "supported_io_types": { 00:10:34.402 "read": true, 00:10:34.402 "write": true, 00:10:34.402 "unmap": true, 00:10:34.402 "flush": true, 00:10:34.402 "reset": true, 00:10:34.402 "nvme_admin": false, 00:10:34.402 "nvme_io": false, 00:10:34.402 "nvme_io_md": false, 00:10:34.402 "write_zeroes": true, 00:10:34.402 "zcopy": false, 00:10:34.402 "get_zone_info": false, 00:10:34.402 "zone_management": false, 00:10:34.402 "zone_append": false, 00:10:34.402 "compare": false, 00:10:34.402 "compare_and_write": false, 00:10:34.402 "abort": 
false, 00:10:34.402 "seek_hole": false, 00:10:34.402 "seek_data": false, 00:10:34.402 "copy": false, 00:10:34.402 "nvme_iov_md": false 00:10:34.402 }, 00:10:34.402 "memory_domains": [ 00:10:34.402 { 00:10:34.402 "dma_device_id": "system", 00:10:34.402 "dma_device_type": 1 00:10:34.402 }, 00:10:34.402 { 00:10:34.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.402 "dma_device_type": 2 00:10:34.402 }, 00:10:34.402 { 00:10:34.402 "dma_device_id": "system", 00:10:34.402 "dma_device_type": 1 00:10:34.402 }, 00:10:34.402 { 00:10:34.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.402 "dma_device_type": 2 00:10:34.402 }, 00:10:34.402 { 00:10:34.402 "dma_device_id": "system", 00:10:34.402 "dma_device_type": 1 00:10:34.402 }, 00:10:34.402 { 00:10:34.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.402 "dma_device_type": 2 00:10:34.402 }, 00:10:34.402 { 00:10:34.402 "dma_device_id": "system", 00:10:34.402 "dma_device_type": 1 00:10:34.402 }, 00:10:34.402 { 00:10:34.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.402 "dma_device_type": 2 00:10:34.402 } 00:10:34.402 ], 00:10:34.402 "driver_specific": { 00:10:34.402 "raid": { 00:10:34.402 "uuid": "e9df0d61-6523-4d88-9ae4-e3505149aea7", 00:10:34.402 "strip_size_kb": 64, 00:10:34.402 "state": "online", 00:10:34.402 "raid_level": "raid0", 00:10:34.402 "superblock": true, 00:10:34.402 "num_base_bdevs": 4, 00:10:34.402 "num_base_bdevs_discovered": 4, 00:10:34.402 "num_base_bdevs_operational": 4, 00:10:34.402 "base_bdevs_list": [ 00:10:34.402 { 00:10:34.402 "name": "NewBaseBdev", 00:10:34.402 "uuid": "2e6d4ac6-9ccf-4547-9263-6d92b52d603b", 00:10:34.402 "is_configured": true, 00:10:34.402 "data_offset": 2048, 00:10:34.402 "data_size": 63488 00:10:34.402 }, 00:10:34.402 { 00:10:34.402 "name": "BaseBdev2", 00:10:34.402 "uuid": "53b47e41-ecea-4ff4-a5c7-01c3e2bd1f18", 00:10:34.402 "is_configured": true, 00:10:34.402 "data_offset": 2048, 00:10:34.402 "data_size": 63488 00:10:34.402 }, 00:10:34.402 { 00:10:34.402 
"name": "BaseBdev3", 00:10:34.402 "uuid": "7816a0c2-42e7-47a5-8464-ee627dad0fec", 00:10:34.402 "is_configured": true, 00:10:34.402 "data_offset": 2048, 00:10:34.402 "data_size": 63488 00:10:34.402 }, 00:10:34.402 { 00:10:34.402 "name": "BaseBdev4", 00:10:34.402 "uuid": "99ca52f7-998a-4b35-8221-75e51d2b9dd2", 00:10:34.402 "is_configured": true, 00:10:34.402 "data_offset": 2048, 00:10:34.402 "data_size": 63488 00:10:34.402 } 00:10:34.402 ] 00:10:34.402 } 00:10:34.402 } 00:10:34.402 }' 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:34.402 BaseBdev2 00:10:34.402 BaseBdev3 00:10:34.402 BaseBdev4' 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.402 21:41:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:34.402 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.403 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.662 [2024-09-29 21:41:53.414111] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.662 [2024-09-29 21:41:53.414189] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:34.662 [2024-09-29 21:41:53.414313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:34.662 [2024-09-29 21:41:53.414405] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:34.662 [2024-09-29 21:41:53.414453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70129 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 70129 ']' 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 70129 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70129 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70129' 00:10:34.662 killing process with pid 70129 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 70129 00:10:34.662 [2024-09-29 21:41:53.454490] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:34.662 21:41:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 70129 00:10:34.922 [2024-09-29 21:41:53.865318] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.304 ************************************ 00:10:36.304 END TEST raid_state_function_test_sb 00:10:36.304 ************************************ 00:10:36.304 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:36.304 00:10:36.304 real 0m11.464s 00:10:36.304 user 0m17.750s 00:10:36.304 sys 
0m2.192s 00:10:36.304 21:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:36.304 21:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.304 21:41:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:36.304 21:41:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:36.304 21:41:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.304 21:41:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.304 ************************************ 00:10:36.304 START TEST raid_superblock_test 00:10:36.304 ************************************ 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70800 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70800 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 70800 ']' 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:36.304 21:41:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.564 [2024-09-29 21:41:55.365559] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:36.564 [2024-09-29 21:41:55.365758] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70800 ] 00:10:36.564 [2024-09-29 21:41:55.533805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.830 [2024-09-29 21:41:55.781469] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.108 [2024-09-29 21:41:56.008926] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.108 [2024-09-29 21:41:56.008960] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:37.379 
21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.379 malloc1 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.379 [2024-09-29 21:41:56.242194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:37.379 [2024-09-29 21:41:56.242345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.379 [2024-09-29 21:41:56.242394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:37.379 [2024-09-29 21:41:56.242430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.379 [2024-09-29 21:41:56.244885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.379 [2024-09-29 21:41:56.244966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:37.379 pt1 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:37.379 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.380 malloc2 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.380 [2024-09-29 21:41:56.316384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:37.380 [2024-09-29 21:41:56.316444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.380 [2024-09-29 21:41:56.316470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:37.380 [2024-09-29 21:41:56.316481] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.380 [2024-09-29 21:41:56.318877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.380 [2024-09-29 21:41:56.318913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:37.380 
pt2 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.380 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.640 malloc3 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.640 [2024-09-29 21:41:56.377728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:37.640 [2024-09-29 21:41:56.377847] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.640 [2024-09-29 21:41:56.377888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:37.640 [2024-09-29 21:41:56.377914] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.640 [2024-09-29 21:41:56.380337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.640 [2024-09-29 21:41:56.380411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:37.640 pt3 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.640 malloc4 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.640 [2024-09-29 21:41:56.442853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:37.640 [2024-09-29 21:41:56.442957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:37.640 [2024-09-29 21:41:56.442995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:37.640 [2024-09-29 21:41:56.443028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:37.640 [2024-09-29 21:41:56.445438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:37.640 [2024-09-29 21:41:56.445506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:37.640 pt4 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.640 [2024-09-29 21:41:56.454895] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:37.640 [2024-09-29 
21:41:56.456999] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:37.640 [2024-09-29 21:41:56.457134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:37.640 [2024-09-29 21:41:56.457202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:37.640 [2024-09-29 21:41:56.457387] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:37.640 [2024-09-29 21:41:56.457405] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:37.640 [2024-09-29 21:41:56.457659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:37.640 [2024-09-29 21:41:56.457822] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:37.640 [2024-09-29 21:41:56.457837] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:37.640 [2024-09-29 21:41:56.457969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.640 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.641 "name": "raid_bdev1", 00:10:37.641 "uuid": "52ca8422-2e38-4b73-a352-9bfbb032c4bc", 00:10:37.641 "strip_size_kb": 64, 00:10:37.641 "state": "online", 00:10:37.641 "raid_level": "raid0", 00:10:37.641 "superblock": true, 00:10:37.641 "num_base_bdevs": 4, 00:10:37.641 "num_base_bdevs_discovered": 4, 00:10:37.641 "num_base_bdevs_operational": 4, 00:10:37.641 "base_bdevs_list": [ 00:10:37.641 { 00:10:37.641 "name": "pt1", 00:10:37.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:37.641 "is_configured": true, 00:10:37.641 "data_offset": 2048, 00:10:37.641 "data_size": 63488 00:10:37.641 }, 00:10:37.641 { 00:10:37.641 "name": "pt2", 00:10:37.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:37.641 "is_configured": true, 00:10:37.641 "data_offset": 2048, 00:10:37.641 "data_size": 63488 00:10:37.641 }, 00:10:37.641 { 00:10:37.641 "name": "pt3", 00:10:37.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:37.641 "is_configured": true, 00:10:37.641 "data_offset": 2048, 00:10:37.641 
"data_size": 63488 00:10:37.641 }, 00:10:37.641 { 00:10:37.641 "name": "pt4", 00:10:37.641 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:37.641 "is_configured": true, 00:10:37.641 "data_offset": 2048, 00:10:37.641 "data_size": 63488 00:10:37.641 } 00:10:37.641 ] 00:10:37.641 }' 00:10:37.641 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.641 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.901 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:37.901 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:37.901 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.901 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.901 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.901 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.901 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.901 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.901 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.901 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.160 [2024-09-29 21:41:56.886399] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.160 21:41:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.160 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.160 "name": "raid_bdev1", 00:10:38.160 "aliases": [ 00:10:38.160 "52ca8422-2e38-4b73-a352-9bfbb032c4bc" 
00:10:38.160 ], 00:10:38.160 "product_name": "Raid Volume", 00:10:38.160 "block_size": 512, 00:10:38.160 "num_blocks": 253952, 00:10:38.160 "uuid": "52ca8422-2e38-4b73-a352-9bfbb032c4bc", 00:10:38.160 "assigned_rate_limits": { 00:10:38.160 "rw_ios_per_sec": 0, 00:10:38.160 "rw_mbytes_per_sec": 0, 00:10:38.160 "r_mbytes_per_sec": 0, 00:10:38.160 "w_mbytes_per_sec": 0 00:10:38.160 }, 00:10:38.160 "claimed": false, 00:10:38.160 "zoned": false, 00:10:38.160 "supported_io_types": { 00:10:38.160 "read": true, 00:10:38.160 "write": true, 00:10:38.160 "unmap": true, 00:10:38.160 "flush": true, 00:10:38.160 "reset": true, 00:10:38.160 "nvme_admin": false, 00:10:38.160 "nvme_io": false, 00:10:38.160 "nvme_io_md": false, 00:10:38.160 "write_zeroes": true, 00:10:38.160 "zcopy": false, 00:10:38.160 "get_zone_info": false, 00:10:38.160 "zone_management": false, 00:10:38.160 "zone_append": false, 00:10:38.160 "compare": false, 00:10:38.160 "compare_and_write": false, 00:10:38.160 "abort": false, 00:10:38.160 "seek_hole": false, 00:10:38.160 "seek_data": false, 00:10:38.160 "copy": false, 00:10:38.160 "nvme_iov_md": false 00:10:38.160 }, 00:10:38.160 "memory_domains": [ 00:10:38.160 { 00:10:38.160 "dma_device_id": "system", 00:10:38.160 "dma_device_type": 1 00:10:38.160 }, 00:10:38.160 { 00:10:38.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.160 "dma_device_type": 2 00:10:38.160 }, 00:10:38.160 { 00:10:38.160 "dma_device_id": "system", 00:10:38.160 "dma_device_type": 1 00:10:38.160 }, 00:10:38.160 { 00:10:38.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.160 "dma_device_type": 2 00:10:38.160 }, 00:10:38.161 { 00:10:38.161 "dma_device_id": "system", 00:10:38.161 "dma_device_type": 1 00:10:38.161 }, 00:10:38.161 { 00:10:38.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.161 "dma_device_type": 2 00:10:38.161 }, 00:10:38.161 { 00:10:38.161 "dma_device_id": "system", 00:10:38.161 "dma_device_type": 1 00:10:38.161 }, 00:10:38.161 { 00:10:38.161 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:38.161 "dma_device_type": 2 00:10:38.161 } 00:10:38.161 ], 00:10:38.161 "driver_specific": { 00:10:38.161 "raid": { 00:10:38.161 "uuid": "52ca8422-2e38-4b73-a352-9bfbb032c4bc", 00:10:38.161 "strip_size_kb": 64, 00:10:38.161 "state": "online", 00:10:38.161 "raid_level": "raid0", 00:10:38.161 "superblock": true, 00:10:38.161 "num_base_bdevs": 4, 00:10:38.161 "num_base_bdevs_discovered": 4, 00:10:38.161 "num_base_bdevs_operational": 4, 00:10:38.161 "base_bdevs_list": [ 00:10:38.161 { 00:10:38.161 "name": "pt1", 00:10:38.161 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.161 "is_configured": true, 00:10:38.161 "data_offset": 2048, 00:10:38.161 "data_size": 63488 00:10:38.161 }, 00:10:38.161 { 00:10:38.161 "name": "pt2", 00:10:38.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.161 "is_configured": true, 00:10:38.161 "data_offset": 2048, 00:10:38.161 "data_size": 63488 00:10:38.161 }, 00:10:38.161 { 00:10:38.161 "name": "pt3", 00:10:38.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.161 "is_configured": true, 00:10:38.161 "data_offset": 2048, 00:10:38.161 "data_size": 63488 00:10:38.161 }, 00:10:38.161 { 00:10:38.161 "name": "pt4", 00:10:38.161 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:38.161 "is_configured": true, 00:10:38.161 "data_offset": 2048, 00:10:38.161 "data_size": 63488 00:10:38.161 } 00:10:38.161 ] 00:10:38.161 } 00:10:38.161 } 00:10:38.161 }' 00:10:38.161 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.161 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:38.161 pt2 00:10:38.161 pt3 00:10:38.161 pt4' 00:10:38.161 21:41:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.161 21:41:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.161 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.421 [2024-09-29 21:41:57.213765] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=52ca8422-2e38-4b73-a352-9bfbb032c4bc 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 52ca8422-2e38-4b73-a352-9bfbb032c4bc ']' 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.421 [2024-09-29 21:41:57.253429] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.421 [2024-09-29 21:41:57.253456] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.421 [2024-09-29 21:41:57.253526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.421 [2024-09-29 21:41:57.253602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.421 [2024-09-29 21:41:57.253616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:38.421 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.686 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:38.686 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:38.686 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:38.686 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:38.686 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:38.686 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:38.686 21:41:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:38.686 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:38.686 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:38.686 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.686 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.686 [2024-09-29 21:41:57.421171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:38.686 [2024-09-29 21:41:57.423173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:38.686 [2024-09-29 21:41:57.423213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:38.686 [2024-09-29 21:41:57.423244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:38.686 [2024-09-29 21:41:57.423292] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:38.686 [2024-09-29 21:41:57.423357] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:38.686 [2024-09-29 21:41:57.423376] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:38.687 [2024-09-29 21:41:57.423394] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:38.687 [2024-09-29 21:41:57.423406] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.687 [2024-09-29 21:41:57.423417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:38.687 request: 00:10:38.687 { 00:10:38.687 "name": "raid_bdev1", 00:10:38.687 "raid_level": "raid0", 00:10:38.687 "base_bdevs": [ 00:10:38.687 "malloc1", 00:10:38.687 "malloc2", 00:10:38.687 "malloc3", 00:10:38.687 "malloc4" 00:10:38.687 ], 00:10:38.687 "strip_size_kb": 64, 00:10:38.687 "superblock": false, 00:10:38.687 "method": "bdev_raid_create", 00:10:38.687 "req_id": 1 00:10:38.687 } 00:10:38.687 Got JSON-RPC error response 00:10:38.687 response: 00:10:38.687 { 00:10:38.687 "code": -17, 00:10:38.687 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:38.687 } 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.687 [2024-09-29 21:41:57.489062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:38.687 [2024-09-29 21:41:57.489152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:38.687 [2024-09-29 21:41:57.489187] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:38.687 [2024-09-29 21:41:57.489215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:38.687 [2024-09-29 21:41:57.491595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:38.687 [2024-09-29 21:41:57.491686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:38.687 [2024-09-29 21:41:57.491776] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:38.687 [2024-09-29 21:41:57.491860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:38.687 pt1 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.687 "name": "raid_bdev1", 00:10:38.687 "uuid": "52ca8422-2e38-4b73-a352-9bfbb032c4bc", 00:10:38.687 "strip_size_kb": 64, 00:10:38.687 "state": "configuring", 00:10:38.687 "raid_level": "raid0", 00:10:38.687 "superblock": true, 00:10:38.687 "num_base_bdevs": 4, 00:10:38.687 "num_base_bdevs_discovered": 1, 00:10:38.687 "num_base_bdevs_operational": 4, 00:10:38.687 "base_bdevs_list": [ 00:10:38.687 { 00:10:38.687 "name": "pt1", 00:10:38.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:38.687 "is_configured": true, 00:10:38.687 "data_offset": 2048, 00:10:38.687 "data_size": 63488 00:10:38.687 }, 00:10:38.687 { 00:10:38.687 "name": null, 00:10:38.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:38.687 "is_configured": false, 00:10:38.687 "data_offset": 2048, 00:10:38.687 "data_size": 63488 00:10:38.687 }, 00:10:38.687 { 00:10:38.687 "name": null, 00:10:38.687 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:38.687 "is_configured": false, 00:10:38.687 "data_offset": 2048, 00:10:38.687 "data_size": 63488 00:10:38.687 }, 00:10:38.687 { 00:10:38.687 "name": null, 00:10:38.687 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:38.687 "is_configured": false, 00:10:38.687 "data_offset": 2048, 00:10:38.687 "data_size": 63488 00:10:38.687 } 00:10:38.687 ] 00:10:38.687 }' 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.687 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.256 [2024-09-29 21:41:57.944274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:39.256 [2024-09-29 21:41:57.944323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.256 [2024-09-29 21:41:57.944341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:39.256 [2024-09-29 21:41:57.944351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.256 [2024-09-29 21:41:57.944751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.256 [2024-09-29 21:41:57.944771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:39.256 [2024-09-29 21:41:57.944829] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:39.256 [2024-09-29 21:41:57.944850] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:39.256 pt2 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.256 [2024-09-29 21:41:57.956278] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.256 21:41:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.256 21:41:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.256 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.256 "name": "raid_bdev1", 00:10:39.256 "uuid": "52ca8422-2e38-4b73-a352-9bfbb032c4bc", 00:10:39.256 "strip_size_kb": 64, 00:10:39.256 "state": "configuring", 00:10:39.256 "raid_level": "raid0", 00:10:39.256 "superblock": true, 00:10:39.256 "num_base_bdevs": 4, 00:10:39.256 "num_base_bdevs_discovered": 1, 00:10:39.256 "num_base_bdevs_operational": 4, 00:10:39.256 "base_bdevs_list": [ 00:10:39.256 { 00:10:39.256 "name": "pt1", 00:10:39.256 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.256 "is_configured": true, 00:10:39.256 "data_offset": 2048, 00:10:39.256 "data_size": 63488 00:10:39.256 }, 00:10:39.256 { 00:10:39.256 "name": null, 00:10:39.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.256 "is_configured": false, 00:10:39.256 "data_offset": 0, 00:10:39.256 "data_size": 63488 00:10:39.256 }, 00:10:39.256 { 00:10:39.256 "name": null, 00:10:39.256 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.256 "is_configured": false, 00:10:39.256 "data_offset": 2048, 00:10:39.256 "data_size": 63488 00:10:39.256 }, 00:10:39.256 { 00:10:39.256 "name": null, 00:10:39.256 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:39.256 "is_configured": false, 00:10:39.256 "data_offset": 2048, 00:10:39.256 "data_size": 63488 00:10:39.256 } 00:10:39.256 ] 00:10:39.256 }' 00:10:39.256 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.256 21:41:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.516 [2024-09-29 21:41:58.391559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:39.516 [2024-09-29 21:41:58.391651] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.516 [2024-09-29 21:41:58.391686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:39.516 [2024-09-29 21:41:58.391711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.516 [2024-09-29 21:41:58.392181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.516 [2024-09-29 21:41:58.392258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:39.516 [2024-09-29 21:41:58.392354] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:39.516 [2024-09-29 21:41:58.392413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:39.516 pt2 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.516 [2024-09-29 21:41:58.403535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:39.516 [2024-09-29 21:41:58.403632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.516 [2024-09-29 21:41:58.403672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:39.516 [2024-09-29 21:41:58.403713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.516 [2024-09-29 21:41:58.404112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.516 [2024-09-29 21:41:58.404131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:39.516 [2024-09-29 21:41:58.404196] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:39.516 [2024-09-29 21:41:58.404212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:39.516 pt3 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.516 [2024-09-29 21:41:58.411503] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:39.516 [2024-09-29 21:41:58.411550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:39.516 [2024-09-29 21:41:58.411568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:39.516 [2024-09-29 21:41:58.411576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:39.516 [2024-09-29 21:41:58.411940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:39.516 [2024-09-29 21:41:58.411956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:39.516 [2024-09-29 21:41:58.412011] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:39.516 [2024-09-29 21:41:58.412033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:39.516 [2024-09-29 21:41:58.412185] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:39.516 [2024-09-29 21:41:58.412195] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:39.516 [2024-09-29 21:41:58.412450] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:39.516 [2024-09-29 21:41:58.412621] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:39.516 [2024-09-29 21:41:58.412634] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:39.516 [2024-09-29 21:41:58.412751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.516 pt4 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.516 "name": "raid_bdev1", 00:10:39.516 "uuid": "52ca8422-2e38-4b73-a352-9bfbb032c4bc", 00:10:39.516 "strip_size_kb": 64, 00:10:39.516 "state": "online", 00:10:39.516 "raid_level": "raid0", 00:10:39.516 
"superblock": true, 00:10:39.516 "num_base_bdevs": 4, 00:10:39.516 "num_base_bdevs_discovered": 4, 00:10:39.516 "num_base_bdevs_operational": 4, 00:10:39.516 "base_bdevs_list": [ 00:10:39.516 { 00:10:39.516 "name": "pt1", 00:10:39.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:39.516 "is_configured": true, 00:10:39.516 "data_offset": 2048, 00:10:39.516 "data_size": 63488 00:10:39.516 }, 00:10:39.516 { 00:10:39.516 "name": "pt2", 00:10:39.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:39.516 "is_configured": true, 00:10:39.516 "data_offset": 2048, 00:10:39.516 "data_size": 63488 00:10:39.516 }, 00:10:39.516 { 00:10:39.516 "name": "pt3", 00:10:39.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:39.516 "is_configured": true, 00:10:39.516 "data_offset": 2048, 00:10:39.516 "data_size": 63488 00:10:39.516 }, 00:10:39.516 { 00:10:39.516 "name": "pt4", 00:10:39.516 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:39.516 "is_configured": true, 00:10:39.516 "data_offset": 2048, 00:10:39.516 "data_size": 63488 00:10:39.516 } 00:10:39.516 ] 00:10:39.516 }' 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.516 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.083 21:41:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.083 [2024-09-29 21:41:58.835083] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.083 "name": "raid_bdev1", 00:10:40.083 "aliases": [ 00:10:40.083 "52ca8422-2e38-4b73-a352-9bfbb032c4bc" 00:10:40.083 ], 00:10:40.083 "product_name": "Raid Volume", 00:10:40.083 "block_size": 512, 00:10:40.083 "num_blocks": 253952, 00:10:40.083 "uuid": "52ca8422-2e38-4b73-a352-9bfbb032c4bc", 00:10:40.083 "assigned_rate_limits": { 00:10:40.083 "rw_ios_per_sec": 0, 00:10:40.083 "rw_mbytes_per_sec": 0, 00:10:40.083 "r_mbytes_per_sec": 0, 00:10:40.083 "w_mbytes_per_sec": 0 00:10:40.083 }, 00:10:40.083 "claimed": false, 00:10:40.083 "zoned": false, 00:10:40.083 "supported_io_types": { 00:10:40.083 "read": true, 00:10:40.083 "write": true, 00:10:40.083 "unmap": true, 00:10:40.083 "flush": true, 00:10:40.083 "reset": true, 00:10:40.083 "nvme_admin": false, 00:10:40.083 "nvme_io": false, 00:10:40.083 "nvme_io_md": false, 00:10:40.083 "write_zeroes": true, 00:10:40.083 "zcopy": false, 00:10:40.083 "get_zone_info": false, 00:10:40.083 "zone_management": false, 00:10:40.083 "zone_append": false, 00:10:40.083 "compare": false, 00:10:40.083 "compare_and_write": false, 00:10:40.083 "abort": false, 00:10:40.083 "seek_hole": false, 00:10:40.083 "seek_data": false, 00:10:40.083 "copy": false, 00:10:40.083 "nvme_iov_md": false 00:10:40.083 }, 00:10:40.083 
"memory_domains": [ 00:10:40.083 { 00:10:40.083 "dma_device_id": "system", 00:10:40.083 "dma_device_type": 1 00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.083 "dma_device_type": 2 00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "dma_device_id": "system", 00:10:40.083 "dma_device_type": 1 00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.083 "dma_device_type": 2 00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "dma_device_id": "system", 00:10:40.083 "dma_device_type": 1 00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.083 "dma_device_type": 2 00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "dma_device_id": "system", 00:10:40.083 "dma_device_type": 1 00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.083 "dma_device_type": 2 00:10:40.083 } 00:10:40.083 ], 00:10:40.083 "driver_specific": { 00:10:40.083 "raid": { 00:10:40.083 "uuid": "52ca8422-2e38-4b73-a352-9bfbb032c4bc", 00:10:40.083 "strip_size_kb": 64, 00:10:40.083 "state": "online", 00:10:40.083 "raid_level": "raid0", 00:10:40.083 "superblock": true, 00:10:40.083 "num_base_bdevs": 4, 00:10:40.083 "num_base_bdevs_discovered": 4, 00:10:40.083 "num_base_bdevs_operational": 4, 00:10:40.083 "base_bdevs_list": [ 00:10:40.083 { 00:10:40.083 "name": "pt1", 00:10:40.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:40.083 "is_configured": true, 00:10:40.083 "data_offset": 2048, 00:10:40.083 "data_size": 63488 00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "name": "pt2", 00:10:40.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:40.083 "is_configured": true, 00:10:40.083 "data_offset": 2048, 00:10:40.083 "data_size": 63488 00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "name": "pt3", 00:10:40.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:40.083 "is_configured": true, 00:10:40.083 "data_offset": 2048, 00:10:40.083 "data_size": 63488 
00:10:40.083 }, 00:10:40.083 { 00:10:40.083 "name": "pt4", 00:10:40.083 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:40.083 "is_configured": true, 00:10:40.083 "data_offset": 2048, 00:10:40.083 "data_size": 63488 00:10:40.083 } 00:10:40.083 ] 00:10:40.083 } 00:10:40.083 } 00:10:40.083 }' 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:40.083 pt2 00:10:40.083 pt3 00:10:40.083 pt4' 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.083 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.084 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:40.084 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.084 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.084 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.084 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.084 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.084 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.084 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.084 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.084 21:41:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:40.084 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.084 21:41:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.084 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.084 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.084 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.084 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.084 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:40.084 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.084 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.084 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.341 [2024-09-29 21:41:59.162430] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 52ca8422-2e38-4b73-a352-9bfbb032c4bc '!=' 52ca8422-2e38-4b73-a352-9bfbb032c4bc ']' 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70800 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 70800 ']' 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 70800 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70800 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70800' 00:10:40.341 killing process with pid 70800 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 70800 00:10:40.341 [2024-09-29 21:41:59.235240] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.341 21:41:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 70800 00:10:40.341 [2024-09-29 21:41:59.235365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.341 [2024-09-29 21:41:59.235439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.341 [2024-09-29 21:41:59.235492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:40.910 [2024-09-29 21:41:59.650662] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.289 21:42:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:42.289 ************************************ 00:10:42.289 END TEST raid_superblock_test 00:10:42.289 ************************************ 00:10:42.289 00:10:42.289 real 0m5.697s 00:10:42.289 user 0m7.900s 00:10:42.289 sys 0m1.095s 00:10:42.289 21:42:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.289 21:42:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.289 21:42:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:42.289 21:42:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:42.289 21:42:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.289 21:42:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.289 ************************************ 00:10:42.289 START TEST raid_read_error_test 00:10:42.289 ************************************ 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LDSQlZn4AM 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71067 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71067 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 71067 ']' 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.289 21:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.289 [2024-09-29 21:42:01.155609] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:42.289 [2024-09-29 21:42:01.155728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71067 ] 00:10:42.548 [2024-09-29 21:42:01.324661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.807 [2024-09-29 21:42:01.572026] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.066 [2024-09-29 21:42:01.793270] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.066 [2024-09-29 21:42:01.793307] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.066 21:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.066 21:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:43.066 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.066 21:42:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:43.066 21:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.066 21:42:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.066 BaseBdev1_malloc 00:10:43.066 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.066 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:43.066 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.066 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.066 true 00:10:43.066 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:43.066 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:43.066 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.066 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.066 [2024-09-29 21:42:02.044191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:43.066 [2024-09-29 21:42:02.044260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.066 [2024-09-29 21:42:02.044279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:43.066 [2024-09-29 21:42:02.044291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.066 [2024-09-29 21:42:02.046676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.066 [2024-09-29 21:42:02.046820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:43.326 BaseBdev1 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.326 BaseBdev2_malloc 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.326 true 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.326 [2024-09-29 21:42:02.131533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:43.326 [2024-09-29 21:42:02.131600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.326 [2024-09-29 21:42:02.131616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:43.326 [2024-09-29 21:42:02.131628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.326 [2024-09-29 21:42:02.134115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.326 [2024-09-29 21:42:02.134151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:43.326 BaseBdev2 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.326 BaseBdev3_malloc 00:10:43.326 21:42:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.326 true 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.326 [2024-09-29 21:42:02.203566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:43.326 [2024-09-29 21:42:02.203619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.326 [2024-09-29 21:42:02.203635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:43.326 [2024-09-29 21:42:02.203646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.326 [2024-09-29 21:42:02.206135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.326 [2024-09-29 21:42:02.206172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:43.326 BaseBdev3 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.326 BaseBdev4_malloc 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.326 true 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.326 [2024-09-29 21:42:02.275373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:43.326 [2024-09-29 21:42:02.275426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.326 [2024-09-29 21:42:02.275443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:43.326 [2024-09-29 21:42:02.275454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.326 [2024-09-29 21:42:02.277862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.326 [2024-09-29 21:42:02.277902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:43.326 BaseBdev4 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.326 [2024-09-29 21:42:02.287433] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.326 [2024-09-29 21:42:02.289510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:43.326 [2024-09-29 21:42:02.289597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:43.326 [2024-09-29 21:42:02.289654] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:43.326 [2024-09-29 21:42:02.289875] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:43.326 [2024-09-29 21:42:02.289889] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:43.326 [2024-09-29 21:42:02.290130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:43.326 [2024-09-29 21:42:02.290278] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:43.326 [2024-09-29 21:42:02.290287] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:43.326 [2024-09-29 21:42:02.290440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.326 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:43.327 21:42:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.327 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.327 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.327 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.327 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.327 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.327 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.327 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.327 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.327 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.327 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.327 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.327 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.586 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.586 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.586 "name": "raid_bdev1", 00:10:43.586 "uuid": "8985fa5d-2e45-4dab-ad4c-cd0cfd07b37e", 00:10:43.586 "strip_size_kb": 64, 00:10:43.586 "state": "online", 00:10:43.586 "raid_level": "raid0", 00:10:43.586 "superblock": true, 00:10:43.586 "num_base_bdevs": 4, 00:10:43.586 "num_base_bdevs_discovered": 4, 00:10:43.586 "num_base_bdevs_operational": 4, 00:10:43.586 "base_bdevs_list": [ 00:10:43.586 
{ 00:10:43.586 "name": "BaseBdev1", 00:10:43.586 "uuid": "7d9d3d5c-1da5-5612-8068-1032466b3d6b", 00:10:43.586 "is_configured": true, 00:10:43.586 "data_offset": 2048, 00:10:43.586 "data_size": 63488 00:10:43.586 }, 00:10:43.586 { 00:10:43.586 "name": "BaseBdev2", 00:10:43.586 "uuid": "e5920c92-1821-5964-9381-93ab382210b5", 00:10:43.586 "is_configured": true, 00:10:43.586 "data_offset": 2048, 00:10:43.586 "data_size": 63488 00:10:43.586 }, 00:10:43.586 { 00:10:43.586 "name": "BaseBdev3", 00:10:43.586 "uuid": "8cec2fba-628e-58a8-ac80-2ae700c22485", 00:10:43.586 "is_configured": true, 00:10:43.586 "data_offset": 2048, 00:10:43.586 "data_size": 63488 00:10:43.586 }, 00:10:43.586 { 00:10:43.586 "name": "BaseBdev4", 00:10:43.586 "uuid": "3bf51f65-385f-554c-89ba-46e92cf2f466", 00:10:43.586 "is_configured": true, 00:10:43.586 "data_offset": 2048, 00:10:43.586 "data_size": 63488 00:10:43.586 } 00:10:43.586 ] 00:10:43.586 }' 00:10:43.586 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.586 21:42:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.844 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:43.844 21:42:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:44.103 [2024-09-29 21:42:02.851920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.040 21:42:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.040 21:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.041 21:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.041 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.041 21:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.041 21:42:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.041 "name": "raid_bdev1", 00:10:45.041 "uuid": "8985fa5d-2e45-4dab-ad4c-cd0cfd07b37e", 00:10:45.041 "strip_size_kb": 64, 00:10:45.041 "state": "online", 00:10:45.041 "raid_level": "raid0", 00:10:45.041 "superblock": true, 00:10:45.041 "num_base_bdevs": 4, 00:10:45.041 "num_base_bdevs_discovered": 4, 00:10:45.041 "num_base_bdevs_operational": 4, 00:10:45.041 "base_bdevs_list": [ 00:10:45.041 { 00:10:45.041 "name": "BaseBdev1", 00:10:45.041 "uuid": "7d9d3d5c-1da5-5612-8068-1032466b3d6b", 00:10:45.041 "is_configured": true, 00:10:45.041 "data_offset": 2048, 00:10:45.041 "data_size": 63488 00:10:45.041 }, 00:10:45.041 { 00:10:45.041 "name": "BaseBdev2", 00:10:45.041 "uuid": "e5920c92-1821-5964-9381-93ab382210b5", 00:10:45.041 "is_configured": true, 00:10:45.041 "data_offset": 2048, 00:10:45.041 "data_size": 63488 00:10:45.041 }, 00:10:45.041 { 00:10:45.041 "name": "BaseBdev3", 00:10:45.041 "uuid": "8cec2fba-628e-58a8-ac80-2ae700c22485", 00:10:45.041 "is_configured": true, 00:10:45.041 "data_offset": 2048, 00:10:45.041 "data_size": 63488 00:10:45.041 }, 00:10:45.041 { 00:10:45.041 "name": "BaseBdev4", 00:10:45.041 "uuid": "3bf51f65-385f-554c-89ba-46e92cf2f466", 00:10:45.041 "is_configured": true, 00:10:45.041 "data_offset": 2048, 00:10:45.041 "data_size": 63488 00:10:45.041 } 00:10:45.041 ] 00:10:45.041 }' 00:10:45.041 21:42:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.041 21:42:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.300 [2024-09-29 21:42:04.208477] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:45.300 [2024-09-29 21:42:04.208606] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.300 [2024-09-29 21:42:04.211224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.300 [2024-09-29 21:42:04.211346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.300 [2024-09-29 21:42:04.211415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:45.300 [2024-09-29 21:42:04.211470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:45.300 { 00:10:45.300 "results": [ 00:10:45.300 { 00:10:45.300 "job": "raid_bdev1", 00:10:45.300 "core_mask": "0x1", 00:10:45.300 "workload": "randrw", 00:10:45.300 "percentage": 50, 00:10:45.300 "status": "finished", 00:10:45.300 "queue_depth": 1, 00:10:45.300 "io_size": 131072, 00:10:45.300 "runtime": 1.357165, 00:10:45.300 "iops": 14301.135086743321, 00:10:45.300 "mibps": 1787.6418858429151, 00:10:45.300 "io_failed": 1, 00:10:45.300 "io_timeout": 0, 00:10:45.300 "avg_latency_us": 98.71115892631764, 00:10:45.300 "min_latency_us": 24.258515283842794, 00:10:45.300 "max_latency_us": 1359.3711790393013 00:10:45.300 } 00:10:45.300 ], 00:10:45.300 "core_count": 1 00:10:45.300 } 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71067 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 71067 ']' 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 71067 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71067 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71067' 00:10:45.300 killing process with pid 71067 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 71067 00:10:45.300 [2024-09-29 21:42:04.249342] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:45.300 21:42:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 71067 00:10:45.866 [2024-09-29 21:42:04.589723] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.246 21:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LDSQlZn4AM 00:10:47.246 21:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:47.246 21:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:47.246 21:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:47.246 21:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:47.246 21:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:47.246 21:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:47.246 21:42:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:47.246 ************************************ 00:10:47.246 END TEST raid_read_error_test 00:10:47.246 ************************************ 00:10:47.246 00:10:47.246 real 0m4.943s 
00:10:47.246 user 0m5.634s 00:10:47.246 sys 0m0.710s 00:10:47.246 21:42:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.246 21:42:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.246 21:42:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:47.246 21:42:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:47.246 21:42:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.246 21:42:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.246 ************************************ 00:10:47.246 START TEST raid_write_error_test 00:10:47.246 ************************************ 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wyqR8Ps3MP 00:10:47.246 21:42:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71218 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71218 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71218 ']' 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:47.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.246 21:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.247 21:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:47.247 21:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.247 [2024-09-29 21:42:06.173959] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:47.247 [2024-09-29 21:42:06.174100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71218 ] 00:10:47.506 [2024-09-29 21:42:06.338704] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.766 [2024-09-29 21:42:06.577982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.025 [2024-09-29 21:42:06.805061] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.025 [2024-09-29 21:42:06.805096] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.025 21:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:48.025 21:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:48.025 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.025 21:42:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:48.025 21:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.025 21:42:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.284 BaseBdev1_malloc 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.285 true 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.285 [2024-09-29 21:42:07.049332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:48.285 [2024-09-29 21:42:07.049470] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.285 [2024-09-29 21:42:07.049493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:48.285 [2024-09-29 21:42:07.049504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.285 [2024-09-29 21:42:07.051860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.285 [2024-09-29 21:42:07.051900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:48.285 BaseBdev1 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.285 BaseBdev2_malloc 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:48.285 21:42:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.285 true 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.285 [2024-09-29 21:42:07.148673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:48.285 [2024-09-29 21:42:07.148733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.285 [2024-09-29 21:42:07.148750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:48.285 [2024-09-29 21:42:07.148761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.285 [2024-09-29 21:42:07.151126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.285 [2024-09-29 21:42:07.151234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:48.285 BaseBdev2 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:48.285 BaseBdev3_malloc 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.285 true 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.285 [2024-09-29 21:42:07.221230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:48.285 [2024-09-29 21:42:07.221280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.285 [2024-09-29 21:42:07.221298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:48.285 [2024-09-29 21:42:07.221309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.285 [2024-09-29 21:42:07.223682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.285 [2024-09-29 21:42:07.223733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:48.285 BaseBdev3 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.285 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.551 BaseBdev4_malloc 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.551 true 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.551 [2024-09-29 21:42:07.295294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:48.551 [2024-09-29 21:42:07.295350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:48.551 [2024-09-29 21:42:07.295366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:48.551 [2024-09-29 21:42:07.295376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:48.551 [2024-09-29 21:42:07.297716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:48.551 [2024-09-29 21:42:07.297759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:48.551 BaseBdev4 
00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.551 [2024-09-29 21:42:07.307356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.551 [2024-09-29 21:42:07.309479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.551 [2024-09-29 21:42:07.309551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.551 [2024-09-29 21:42:07.309604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:48.551 [2024-09-29 21:42:07.309808] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:48.551 [2024-09-29 21:42:07.309822] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:48.551 [2024-09-29 21:42:07.310067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:48.551 [2024-09-29 21:42:07.310225] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:48.551 [2024-09-29 21:42:07.310234] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:48.551 [2024-09-29 21:42:07.310395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.551 "name": "raid_bdev1", 00:10:48.551 "uuid": "b88ac0f7-322e-4890-8a6f-4991094ccaed", 00:10:48.551 "strip_size_kb": 64, 00:10:48.551 "state": "online", 00:10:48.551 "raid_level": "raid0", 00:10:48.551 "superblock": true, 00:10:48.551 "num_base_bdevs": 4, 00:10:48.551 "num_base_bdevs_discovered": 4, 00:10:48.551 
"num_base_bdevs_operational": 4, 00:10:48.551 "base_bdevs_list": [ 00:10:48.551 { 00:10:48.551 "name": "BaseBdev1", 00:10:48.551 "uuid": "d72636a8-2bd8-512f-9101-c17494254685", 00:10:48.551 "is_configured": true, 00:10:48.551 "data_offset": 2048, 00:10:48.551 "data_size": 63488 00:10:48.551 }, 00:10:48.551 { 00:10:48.551 "name": "BaseBdev2", 00:10:48.551 "uuid": "10ae04ab-a44a-54ae-b0d7-6608d4c72a25", 00:10:48.551 "is_configured": true, 00:10:48.551 "data_offset": 2048, 00:10:48.551 "data_size": 63488 00:10:48.551 }, 00:10:48.551 { 00:10:48.551 "name": "BaseBdev3", 00:10:48.551 "uuid": "aa938510-806f-507c-abda-d3c7baa2aa10", 00:10:48.551 "is_configured": true, 00:10:48.551 "data_offset": 2048, 00:10:48.551 "data_size": 63488 00:10:48.551 }, 00:10:48.551 { 00:10:48.551 "name": "BaseBdev4", 00:10:48.551 "uuid": "d2e010b9-c3c3-5a62-85e7-87c0c39b5fa6", 00:10:48.551 "is_configured": true, 00:10:48.551 "data_offset": 2048, 00:10:48.551 "data_size": 63488 00:10:48.551 } 00:10:48.551 ] 00:10:48.551 }' 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.551 21:42:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.820 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:48.820 21:42:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:49.079 [2024-09-29 21:42:07.815578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.018 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.018 "name": "raid_bdev1", 00:10:50.018 "uuid": "b88ac0f7-322e-4890-8a6f-4991094ccaed", 00:10:50.018 "strip_size_kb": 64, 00:10:50.018 "state": "online", 00:10:50.018 "raid_level": "raid0", 00:10:50.018 "superblock": true, 00:10:50.018 "num_base_bdevs": 4, 00:10:50.018 "num_base_bdevs_discovered": 4, 00:10:50.018 "num_base_bdevs_operational": 4, 00:10:50.018 "base_bdevs_list": [ 00:10:50.018 { 00:10:50.018 "name": "BaseBdev1", 00:10:50.018 "uuid": "d72636a8-2bd8-512f-9101-c17494254685", 00:10:50.018 "is_configured": true, 00:10:50.018 "data_offset": 2048, 00:10:50.018 "data_size": 63488 00:10:50.018 }, 00:10:50.018 { 00:10:50.018 "name": "BaseBdev2", 00:10:50.018 "uuid": "10ae04ab-a44a-54ae-b0d7-6608d4c72a25", 00:10:50.018 "is_configured": true, 00:10:50.018 "data_offset": 2048, 00:10:50.018 "data_size": 63488 00:10:50.018 }, 00:10:50.018 { 00:10:50.018 "name": "BaseBdev3", 00:10:50.018 "uuid": "aa938510-806f-507c-abda-d3c7baa2aa10", 00:10:50.018 "is_configured": true, 00:10:50.018 "data_offset": 2048, 00:10:50.018 "data_size": 63488 00:10:50.018 }, 00:10:50.018 { 00:10:50.018 "name": "BaseBdev4", 00:10:50.019 "uuid": "d2e010b9-c3c3-5a62-85e7-87c0c39b5fa6", 00:10:50.019 "is_configured": true, 00:10:50.019 "data_offset": 2048, 00:10:50.019 "data_size": 63488 00:10:50.019 } 00:10:50.019 ] 00:10:50.019 }' 00:10:50.019 21:42:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.019 21:42:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.278 21:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:50.278 21:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.278 21:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:50.278 [2024-09-29 21:42:09.220203] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.278 [2024-09-29 21:42:09.220313] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.278 [2024-09-29 21:42:09.222832] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.278 [2024-09-29 21:42:09.222953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.278 [2024-09-29 21:42:09.223019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.278 [2024-09-29 21:42:09.223085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:50.278 { 00:10:50.278 "results": [ 00:10:50.278 { 00:10:50.278 "job": "raid_bdev1", 00:10:50.278 "core_mask": "0x1", 00:10:50.278 "workload": "randrw", 00:10:50.278 "percentage": 50, 00:10:50.278 "status": "finished", 00:10:50.278 "queue_depth": 1, 00:10:50.278 "io_size": 131072, 00:10:50.278 "runtime": 1.405328, 00:10:50.278 "iops": 14387.388566939533, 00:10:50.278 "mibps": 1798.4235708674416, 00:10:50.278 "io_failed": 1, 00:10:50.278 "io_timeout": 0, 00:10:50.278 "avg_latency_us": 98.15981858940303, 00:10:50.278 "min_latency_us": 24.593886462882097, 00:10:50.278 "max_latency_us": 1366.5257641921398 00:10:50.278 } 00:10:50.278 ], 00:10:50.278 "core_count": 1 00:10:50.278 } 00:10:50.278 21:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.278 21:42:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71218 00:10:50.278 21:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71218 ']' 00:10:50.278 21:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71218 00:10:50.278 21:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # 
uname 00:10:50.278 21:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.278 21:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71218 00:10:50.537 killing process with pid 71218 00:10:50.537 21:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.537 21:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.537 21:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71218' 00:10:50.537 21:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71218 00:10:50.537 [2024-09-29 21:42:09.265440] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:50.537 21:42:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71218 00:10:50.797 [2024-09-29 21:42:09.603262] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.178 21:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wyqR8Ps3MP 00:10:52.178 21:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:52.178 21:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:52.178 21:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:52.178 21:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:52.178 21:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.178 21:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:52.178 21:42:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:52.178 00:10:52.178 real 0m4.926s 00:10:52.178 user 0m5.578s 00:10:52.178 sys 0m0.737s 00:10:52.178 
************************************ 00:10:52.178 END TEST raid_write_error_test 00:10:52.178 ************************************ 00:10:52.178 21:42:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:52.178 21:42:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.178 21:42:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:52.178 21:42:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:52.178 21:42:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:52.178 21:42:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:52.178 21:42:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.178 ************************************ 00:10:52.178 START TEST raid_state_function_test 00:10:52.178 ************************************ 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.178 21:42:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:52.178 21:42:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71363 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71363' 00:10:52.178 Process raid pid: 71363 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71363 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71363 ']' 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:52.178 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.178 [2024-09-29 21:42:11.160786] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:52.178 [2024-09-29 21:42:11.160899] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.438 [2024-09-29 21:42:11.324948] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.697 [2024-09-29 21:42:11.578336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.957 [2024-09-29 21:42:11.812853] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.957 [2024-09-29 21:42:11.812992] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.217 [2024-09-29 21:42:11.978092] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.217 [2024-09-29 21:42:11.978150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.217 [2024-09-29 21:42:11.978161] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.217 [2024-09-29 21:42:11.978171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.217 [2024-09-29 21:42:11.978177] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:53.217 [2024-09-29 21:42:11.978188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.217 [2024-09-29 21:42:11.978195] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.217 [2024-09-29 21:42:11.978205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:53.217 21:42:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.217 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.217 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.217 "name": "Existed_Raid", 00:10:53.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.217 "strip_size_kb": 64, 00:10:53.217 "state": "configuring", 00:10:53.217 "raid_level": "concat", 00:10:53.217 "superblock": false, 00:10:53.217 "num_base_bdevs": 4, 00:10:53.217 "num_base_bdevs_discovered": 0, 00:10:53.217 "num_base_bdevs_operational": 4, 00:10:53.218 "base_bdevs_list": [ 00:10:53.218 { 00:10:53.218 "name": "BaseBdev1", 00:10:53.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.218 "is_configured": false, 00:10:53.218 "data_offset": 0, 00:10:53.218 "data_size": 0 00:10:53.218 }, 00:10:53.218 { 00:10:53.218 "name": "BaseBdev2", 00:10:53.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.218 "is_configured": false, 00:10:53.218 "data_offset": 0, 00:10:53.218 "data_size": 0 00:10:53.218 }, 00:10:53.218 { 00:10:53.218 "name": "BaseBdev3", 00:10:53.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.218 "is_configured": false, 00:10:53.218 "data_offset": 0, 00:10:53.218 "data_size": 0 00:10:53.218 }, 00:10:53.218 { 00:10:53.218 "name": "BaseBdev4", 00:10:53.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.218 "is_configured": false, 00:10:53.218 "data_offset": 0, 00:10:53.218 "data_size": 0 00:10:53.218 } 00:10:53.218 ] 00:10:53.218 }' 00:10:53.218 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.218 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.477 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:53.477 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.477 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.477 [2024-09-29 21:42:12.433203] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.477 [2024-09-29 21:42:12.433332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:53.477 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.477 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:53.477 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.477 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.477 [2024-09-29 21:42:12.441234] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:53.477 [2024-09-29 21:42:12.441318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:53.477 [2024-09-29 21:42:12.441344] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:53.477 [2024-09-29 21:42:12.441367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:53.477 [2024-09-29 21:42:12.441384] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:53.478 [2024-09-29 21:42:12.441405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:53.478 [2024-09-29 21:42:12.441423] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:53.478 [2024-09-29 21:42:12.441444] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:53.478 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.478 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:53.478 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.478 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.738 [2024-09-29 21:42:12.525901] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.738 BaseBdev1 00:10:53.738 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.738 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:53.738 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:53.738 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:53.738 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:53.738 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:53.738 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:53.738 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:53.738 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.739 [ 00:10:53.739 { 00:10:53.739 "name": "BaseBdev1", 00:10:53.739 "aliases": [ 00:10:53.739 "00a38aba-1b84-497b-903b-753fa9bb0f17" 00:10:53.739 ], 00:10:53.739 "product_name": "Malloc disk", 00:10:53.739 "block_size": 512, 00:10:53.739 "num_blocks": 65536, 00:10:53.739 "uuid": "00a38aba-1b84-497b-903b-753fa9bb0f17", 00:10:53.739 "assigned_rate_limits": { 00:10:53.739 "rw_ios_per_sec": 0, 00:10:53.739 "rw_mbytes_per_sec": 0, 00:10:53.739 "r_mbytes_per_sec": 0, 00:10:53.739 "w_mbytes_per_sec": 0 00:10:53.739 }, 00:10:53.739 "claimed": true, 00:10:53.739 "claim_type": "exclusive_write", 00:10:53.739 "zoned": false, 00:10:53.739 "supported_io_types": { 00:10:53.739 "read": true, 00:10:53.739 "write": true, 00:10:53.739 "unmap": true, 00:10:53.739 "flush": true, 00:10:53.739 "reset": true, 00:10:53.739 "nvme_admin": false, 00:10:53.739 "nvme_io": false, 00:10:53.739 "nvme_io_md": false, 00:10:53.739 "write_zeroes": true, 00:10:53.739 "zcopy": true, 00:10:53.739 "get_zone_info": false, 00:10:53.739 "zone_management": false, 00:10:53.739 "zone_append": false, 00:10:53.739 "compare": false, 00:10:53.739 "compare_and_write": false, 00:10:53.739 "abort": true, 00:10:53.739 "seek_hole": false, 00:10:53.739 "seek_data": false, 00:10:53.739 "copy": true, 00:10:53.739 "nvme_iov_md": false 00:10:53.739 }, 00:10:53.739 "memory_domains": [ 00:10:53.739 { 00:10:53.739 "dma_device_id": "system", 00:10:53.739 "dma_device_type": 1 00:10:53.739 }, 00:10:53.739 { 00:10:53.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.739 "dma_device_type": 2 00:10:53.739 } 00:10:53.739 ], 00:10:53.739 "driver_specific": {} 00:10:53.739 } 00:10:53.739 ] 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.739 "name": "Existed_Raid", 
00:10:53.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.739 "strip_size_kb": 64, 00:10:53.739 "state": "configuring", 00:10:53.739 "raid_level": "concat", 00:10:53.739 "superblock": false, 00:10:53.739 "num_base_bdevs": 4, 00:10:53.739 "num_base_bdevs_discovered": 1, 00:10:53.739 "num_base_bdevs_operational": 4, 00:10:53.739 "base_bdevs_list": [ 00:10:53.739 { 00:10:53.739 "name": "BaseBdev1", 00:10:53.739 "uuid": "00a38aba-1b84-497b-903b-753fa9bb0f17", 00:10:53.739 "is_configured": true, 00:10:53.739 "data_offset": 0, 00:10:53.739 "data_size": 65536 00:10:53.739 }, 00:10:53.739 { 00:10:53.739 "name": "BaseBdev2", 00:10:53.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.739 "is_configured": false, 00:10:53.739 "data_offset": 0, 00:10:53.739 "data_size": 0 00:10:53.739 }, 00:10:53.739 { 00:10:53.739 "name": "BaseBdev3", 00:10:53.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.739 "is_configured": false, 00:10:53.739 "data_offset": 0, 00:10:53.739 "data_size": 0 00:10:53.739 }, 00:10:53.739 { 00:10:53.739 "name": "BaseBdev4", 00:10:53.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:53.739 "is_configured": false, 00:10:53.739 "data_offset": 0, 00:10:53.739 "data_size": 0 00:10:53.739 } 00:10:53.739 ] 00:10:53.739 }' 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.739 21:42:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.308 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.308 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.308 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.308 [2024-09-29 21:42:13.025066] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.308 [2024-09-29 21:42:13.025112] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:54.308 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.308 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:54.308 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.308 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.308 [2024-09-29 21:42:13.037105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.308 [2024-09-29 21:42:13.039146] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:54.308 [2024-09-29 21:42:13.039188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:54.308 [2024-09-29 21:42:13.039198] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:54.308 [2024-09-29 21:42:13.039208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:54.308 [2024-09-29 21:42:13.039215] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:54.309 [2024-09-29 21:42:13.039224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.309 "name": "Existed_Raid", 00:10:54.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.309 "strip_size_kb": 64, 00:10:54.309 "state": "configuring", 00:10:54.309 "raid_level": "concat", 00:10:54.309 "superblock": false, 00:10:54.309 "num_base_bdevs": 4, 00:10:54.309 
"num_base_bdevs_discovered": 1, 00:10:54.309 "num_base_bdevs_operational": 4, 00:10:54.309 "base_bdevs_list": [ 00:10:54.309 { 00:10:54.309 "name": "BaseBdev1", 00:10:54.309 "uuid": "00a38aba-1b84-497b-903b-753fa9bb0f17", 00:10:54.309 "is_configured": true, 00:10:54.309 "data_offset": 0, 00:10:54.309 "data_size": 65536 00:10:54.309 }, 00:10:54.309 { 00:10:54.309 "name": "BaseBdev2", 00:10:54.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.309 "is_configured": false, 00:10:54.309 "data_offset": 0, 00:10:54.309 "data_size": 0 00:10:54.309 }, 00:10:54.309 { 00:10:54.309 "name": "BaseBdev3", 00:10:54.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.309 "is_configured": false, 00:10:54.309 "data_offset": 0, 00:10:54.309 "data_size": 0 00:10:54.309 }, 00:10:54.309 { 00:10:54.309 "name": "BaseBdev4", 00:10:54.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.309 "is_configured": false, 00:10:54.309 "data_offset": 0, 00:10:54.309 "data_size": 0 00:10:54.309 } 00:10:54.309 ] 00:10:54.309 }' 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.309 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.569 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:54.569 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.569 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.569 [2024-09-29 21:42:13.540555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.569 BaseBdev2 00:10:54.569 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.569 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:54.569 21:42:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:54.569 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:54.569 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:54.569 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:54.569 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:54.569 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:54.569 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.569 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.829 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.829 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:54.829 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.829 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.829 [ 00:10:54.829 { 00:10:54.829 "name": "BaseBdev2", 00:10:54.829 "aliases": [ 00:10:54.829 "101adca2-d907-4483-967a-7ce032c86ad1" 00:10:54.829 ], 00:10:54.829 "product_name": "Malloc disk", 00:10:54.829 "block_size": 512, 00:10:54.829 "num_blocks": 65536, 00:10:54.829 "uuid": "101adca2-d907-4483-967a-7ce032c86ad1", 00:10:54.829 "assigned_rate_limits": { 00:10:54.829 "rw_ios_per_sec": 0, 00:10:54.829 "rw_mbytes_per_sec": 0, 00:10:54.829 "r_mbytes_per_sec": 0, 00:10:54.829 "w_mbytes_per_sec": 0 00:10:54.829 }, 00:10:54.829 "claimed": true, 00:10:54.830 "claim_type": "exclusive_write", 00:10:54.830 "zoned": false, 00:10:54.830 "supported_io_types": { 
00:10:54.830 "read": true, 00:10:54.830 "write": true, 00:10:54.830 "unmap": true, 00:10:54.830 "flush": true, 00:10:54.830 "reset": true, 00:10:54.830 "nvme_admin": false, 00:10:54.830 "nvme_io": false, 00:10:54.830 "nvme_io_md": false, 00:10:54.830 "write_zeroes": true, 00:10:54.830 "zcopy": true, 00:10:54.830 "get_zone_info": false, 00:10:54.830 "zone_management": false, 00:10:54.830 "zone_append": false, 00:10:54.830 "compare": false, 00:10:54.830 "compare_and_write": false, 00:10:54.830 "abort": true, 00:10:54.830 "seek_hole": false, 00:10:54.830 "seek_data": false, 00:10:54.830 "copy": true, 00:10:54.830 "nvme_iov_md": false 00:10:54.830 }, 00:10:54.830 "memory_domains": [ 00:10:54.830 { 00:10:54.830 "dma_device_id": "system", 00:10:54.830 "dma_device_type": 1 00:10:54.830 }, 00:10:54.830 { 00:10:54.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.830 "dma_device_type": 2 00:10:54.830 } 00:10:54.830 ], 00:10:54.830 "driver_specific": {} 00:10:54.830 } 00:10:54.830 ] 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.830 "name": "Existed_Raid", 00:10:54.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.830 "strip_size_kb": 64, 00:10:54.830 "state": "configuring", 00:10:54.830 "raid_level": "concat", 00:10:54.830 "superblock": false, 00:10:54.830 "num_base_bdevs": 4, 00:10:54.830 "num_base_bdevs_discovered": 2, 00:10:54.830 "num_base_bdevs_operational": 4, 00:10:54.830 "base_bdevs_list": [ 00:10:54.830 { 00:10:54.830 "name": "BaseBdev1", 00:10:54.830 "uuid": "00a38aba-1b84-497b-903b-753fa9bb0f17", 00:10:54.830 "is_configured": true, 00:10:54.830 "data_offset": 0, 00:10:54.830 "data_size": 65536 00:10:54.830 }, 00:10:54.830 { 00:10:54.830 "name": "BaseBdev2", 00:10:54.830 "uuid": "101adca2-d907-4483-967a-7ce032c86ad1", 00:10:54.830 
"is_configured": true, 00:10:54.830 "data_offset": 0, 00:10:54.830 "data_size": 65536 00:10:54.830 }, 00:10:54.830 { 00:10:54.830 "name": "BaseBdev3", 00:10:54.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.830 "is_configured": false, 00:10:54.830 "data_offset": 0, 00:10:54.830 "data_size": 0 00:10:54.830 }, 00:10:54.830 { 00:10:54.830 "name": "BaseBdev4", 00:10:54.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.830 "is_configured": false, 00:10:54.830 "data_offset": 0, 00:10:54.830 "data_size": 0 00:10:54.830 } 00:10:54.830 ] 00:10:54.830 }' 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.830 21:42:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.090 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:55.090 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.090 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.349 [2024-09-29 21:42:14.095369] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.349 BaseBdev3 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.349 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.349 [ 00:10:55.349 { 00:10:55.349 "name": "BaseBdev3", 00:10:55.349 "aliases": [ 00:10:55.349 "f84329cf-3fa9-4452-96d7-5a112fed501d" 00:10:55.349 ], 00:10:55.349 "product_name": "Malloc disk", 00:10:55.349 "block_size": 512, 00:10:55.349 "num_blocks": 65536, 00:10:55.349 "uuid": "f84329cf-3fa9-4452-96d7-5a112fed501d", 00:10:55.349 "assigned_rate_limits": { 00:10:55.350 "rw_ios_per_sec": 0, 00:10:55.350 "rw_mbytes_per_sec": 0, 00:10:55.350 "r_mbytes_per_sec": 0, 00:10:55.350 "w_mbytes_per_sec": 0 00:10:55.350 }, 00:10:55.350 "claimed": true, 00:10:55.350 "claim_type": "exclusive_write", 00:10:55.350 "zoned": false, 00:10:55.350 "supported_io_types": { 00:10:55.350 "read": true, 00:10:55.350 "write": true, 00:10:55.350 "unmap": true, 00:10:55.350 "flush": true, 00:10:55.350 "reset": true, 00:10:55.350 "nvme_admin": false, 00:10:55.350 "nvme_io": false, 00:10:55.350 "nvme_io_md": false, 00:10:55.350 "write_zeroes": true, 00:10:55.350 "zcopy": true, 00:10:55.350 "get_zone_info": false, 00:10:55.350 "zone_management": false, 00:10:55.350 "zone_append": false, 00:10:55.350 "compare": false, 00:10:55.350 "compare_and_write": false, 
00:10:55.350 "abort": true, 00:10:55.350 "seek_hole": false, 00:10:55.350 "seek_data": false, 00:10:55.350 "copy": true, 00:10:55.350 "nvme_iov_md": false 00:10:55.350 }, 00:10:55.350 "memory_domains": [ 00:10:55.350 { 00:10:55.350 "dma_device_id": "system", 00:10:55.350 "dma_device_type": 1 00:10:55.350 }, 00:10:55.350 { 00:10:55.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.350 "dma_device_type": 2 00:10:55.350 } 00:10:55.350 ], 00:10:55.350 "driver_specific": {} 00:10:55.350 } 00:10:55.350 ] 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.350 "name": "Existed_Raid", 00:10:55.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.350 "strip_size_kb": 64, 00:10:55.350 "state": "configuring", 00:10:55.350 "raid_level": "concat", 00:10:55.350 "superblock": false, 00:10:55.350 "num_base_bdevs": 4, 00:10:55.350 "num_base_bdevs_discovered": 3, 00:10:55.350 "num_base_bdevs_operational": 4, 00:10:55.350 "base_bdevs_list": [ 00:10:55.350 { 00:10:55.350 "name": "BaseBdev1", 00:10:55.350 "uuid": "00a38aba-1b84-497b-903b-753fa9bb0f17", 00:10:55.350 "is_configured": true, 00:10:55.350 "data_offset": 0, 00:10:55.350 "data_size": 65536 00:10:55.350 }, 00:10:55.350 { 00:10:55.350 "name": "BaseBdev2", 00:10:55.350 "uuid": "101adca2-d907-4483-967a-7ce032c86ad1", 00:10:55.350 "is_configured": true, 00:10:55.350 "data_offset": 0, 00:10:55.350 "data_size": 65536 00:10:55.350 }, 00:10:55.350 { 00:10:55.350 "name": "BaseBdev3", 00:10:55.350 "uuid": "f84329cf-3fa9-4452-96d7-5a112fed501d", 00:10:55.350 "is_configured": true, 00:10:55.350 "data_offset": 0, 00:10:55.350 "data_size": 65536 00:10:55.350 }, 00:10:55.350 { 00:10:55.350 "name": "BaseBdev4", 00:10:55.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.350 "is_configured": false, 
00:10:55.350 "data_offset": 0, 00:10:55.350 "data_size": 0 00:10:55.350 } 00:10:55.350 ] 00:10:55.350 }' 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.350 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.610 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:55.610 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.610 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.869 [2024-09-29 21:42:14.609281] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.869 [2024-09-29 21:42:14.609415] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:55.869 [2024-09-29 21:42:14.609442] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:55.869 [2024-09-29 21:42:14.609819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:55.869 [2024-09-29 21:42:14.610070] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:55.869 [2024-09-29 21:42:14.610119] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:55.869 [2024-09-29 21:42:14.610432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.869 BaseBdev4 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.869 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.869 [ 00:10:55.869 { 00:10:55.869 "name": "BaseBdev4", 00:10:55.869 "aliases": [ 00:10:55.869 "e9ab7086-ceab-4719-a94f-72e1f6c433ee" 00:10:55.869 ], 00:10:55.869 "product_name": "Malloc disk", 00:10:55.869 "block_size": 512, 00:10:55.869 "num_blocks": 65536, 00:10:55.869 "uuid": "e9ab7086-ceab-4719-a94f-72e1f6c433ee", 00:10:55.869 "assigned_rate_limits": { 00:10:55.869 "rw_ios_per_sec": 0, 00:10:55.869 "rw_mbytes_per_sec": 0, 00:10:55.869 "r_mbytes_per_sec": 0, 00:10:55.869 "w_mbytes_per_sec": 0 00:10:55.869 }, 00:10:55.869 "claimed": true, 00:10:55.869 "claim_type": "exclusive_write", 00:10:55.869 "zoned": false, 00:10:55.869 "supported_io_types": { 00:10:55.869 "read": true, 00:10:55.869 "write": true, 00:10:55.869 "unmap": true, 00:10:55.869 "flush": true, 00:10:55.869 "reset": true, 00:10:55.869 
"nvme_admin": false, 00:10:55.869 "nvme_io": false, 00:10:55.869 "nvme_io_md": false, 00:10:55.869 "write_zeroes": true, 00:10:55.870 "zcopy": true, 00:10:55.870 "get_zone_info": false, 00:10:55.870 "zone_management": false, 00:10:55.870 "zone_append": false, 00:10:55.870 "compare": false, 00:10:55.870 "compare_and_write": false, 00:10:55.870 "abort": true, 00:10:55.870 "seek_hole": false, 00:10:55.870 "seek_data": false, 00:10:55.870 "copy": true, 00:10:55.870 "nvme_iov_md": false 00:10:55.870 }, 00:10:55.870 "memory_domains": [ 00:10:55.870 { 00:10:55.870 "dma_device_id": "system", 00:10:55.870 "dma_device_type": 1 00:10:55.870 }, 00:10:55.870 { 00:10:55.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.870 "dma_device_type": 2 00:10:55.870 } 00:10:55.870 ], 00:10:55.870 "driver_specific": {} 00:10:55.870 } 00:10:55.870 ] 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.870 
21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.870 "name": "Existed_Raid", 00:10:55.870 "uuid": "98871fc5-184a-45ef-b0eb-3423773cec29", 00:10:55.870 "strip_size_kb": 64, 00:10:55.870 "state": "online", 00:10:55.870 "raid_level": "concat", 00:10:55.870 "superblock": false, 00:10:55.870 "num_base_bdevs": 4, 00:10:55.870 "num_base_bdevs_discovered": 4, 00:10:55.870 "num_base_bdevs_operational": 4, 00:10:55.870 "base_bdevs_list": [ 00:10:55.870 { 00:10:55.870 "name": "BaseBdev1", 00:10:55.870 "uuid": "00a38aba-1b84-497b-903b-753fa9bb0f17", 00:10:55.870 "is_configured": true, 00:10:55.870 "data_offset": 0, 00:10:55.870 "data_size": 65536 00:10:55.870 }, 00:10:55.870 { 00:10:55.870 "name": "BaseBdev2", 00:10:55.870 "uuid": "101adca2-d907-4483-967a-7ce032c86ad1", 00:10:55.870 "is_configured": true, 00:10:55.870 "data_offset": 0, 00:10:55.870 "data_size": 65536 00:10:55.870 }, 00:10:55.870 { 00:10:55.870 "name": "BaseBdev3", 
00:10:55.870 "uuid": "f84329cf-3fa9-4452-96d7-5a112fed501d", 00:10:55.870 "is_configured": true, 00:10:55.870 "data_offset": 0, 00:10:55.870 "data_size": 65536 00:10:55.870 }, 00:10:55.870 { 00:10:55.870 "name": "BaseBdev4", 00:10:55.870 "uuid": "e9ab7086-ceab-4719-a94f-72e1f6c433ee", 00:10:55.870 "is_configured": true, 00:10:55.870 "data_offset": 0, 00:10:55.870 "data_size": 65536 00:10:55.870 } 00:10:55.870 ] 00:10:55.870 }' 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.870 21:42:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.439 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:56.439 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:56.439 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.439 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.439 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.439 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.439 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:56.439 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.439 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.439 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.439 [2024-09-29 21:42:15.144762] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.439 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.439 
21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.439 "name": "Existed_Raid", 00:10:56.439 "aliases": [ 00:10:56.439 "98871fc5-184a-45ef-b0eb-3423773cec29" 00:10:56.439 ], 00:10:56.439 "product_name": "Raid Volume", 00:10:56.439 "block_size": 512, 00:10:56.439 "num_blocks": 262144, 00:10:56.439 "uuid": "98871fc5-184a-45ef-b0eb-3423773cec29", 00:10:56.439 "assigned_rate_limits": { 00:10:56.439 "rw_ios_per_sec": 0, 00:10:56.439 "rw_mbytes_per_sec": 0, 00:10:56.439 "r_mbytes_per_sec": 0, 00:10:56.439 "w_mbytes_per_sec": 0 00:10:56.439 }, 00:10:56.439 "claimed": false, 00:10:56.439 "zoned": false, 00:10:56.439 "supported_io_types": { 00:10:56.439 "read": true, 00:10:56.439 "write": true, 00:10:56.439 "unmap": true, 00:10:56.439 "flush": true, 00:10:56.439 "reset": true, 00:10:56.439 "nvme_admin": false, 00:10:56.439 "nvme_io": false, 00:10:56.439 "nvme_io_md": false, 00:10:56.439 "write_zeroes": true, 00:10:56.439 "zcopy": false, 00:10:56.439 "get_zone_info": false, 00:10:56.439 "zone_management": false, 00:10:56.439 "zone_append": false, 00:10:56.439 "compare": false, 00:10:56.439 "compare_and_write": false, 00:10:56.439 "abort": false, 00:10:56.439 "seek_hole": false, 00:10:56.439 "seek_data": false, 00:10:56.439 "copy": false, 00:10:56.439 "nvme_iov_md": false 00:10:56.440 }, 00:10:56.440 "memory_domains": [ 00:10:56.440 { 00:10:56.440 "dma_device_id": "system", 00:10:56.440 "dma_device_type": 1 00:10:56.440 }, 00:10:56.440 { 00:10:56.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.440 "dma_device_type": 2 00:10:56.440 }, 00:10:56.440 { 00:10:56.440 "dma_device_id": "system", 00:10:56.440 "dma_device_type": 1 00:10:56.440 }, 00:10:56.440 { 00:10:56.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.440 "dma_device_type": 2 00:10:56.440 }, 00:10:56.440 { 00:10:56.440 "dma_device_id": "system", 00:10:56.440 "dma_device_type": 1 00:10:56.440 }, 00:10:56.440 { 00:10:56.440 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:56.440 "dma_device_type": 2 00:10:56.440 }, 00:10:56.440 { 00:10:56.440 "dma_device_id": "system", 00:10:56.440 "dma_device_type": 1 00:10:56.440 }, 00:10:56.440 { 00:10:56.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.440 "dma_device_type": 2 00:10:56.440 } 00:10:56.440 ], 00:10:56.440 "driver_specific": { 00:10:56.440 "raid": { 00:10:56.440 "uuid": "98871fc5-184a-45ef-b0eb-3423773cec29", 00:10:56.440 "strip_size_kb": 64, 00:10:56.440 "state": "online", 00:10:56.440 "raid_level": "concat", 00:10:56.440 "superblock": false, 00:10:56.440 "num_base_bdevs": 4, 00:10:56.440 "num_base_bdevs_discovered": 4, 00:10:56.440 "num_base_bdevs_operational": 4, 00:10:56.440 "base_bdevs_list": [ 00:10:56.440 { 00:10:56.440 "name": "BaseBdev1", 00:10:56.440 "uuid": "00a38aba-1b84-497b-903b-753fa9bb0f17", 00:10:56.440 "is_configured": true, 00:10:56.440 "data_offset": 0, 00:10:56.440 "data_size": 65536 00:10:56.440 }, 00:10:56.440 { 00:10:56.440 "name": "BaseBdev2", 00:10:56.440 "uuid": "101adca2-d907-4483-967a-7ce032c86ad1", 00:10:56.440 "is_configured": true, 00:10:56.440 "data_offset": 0, 00:10:56.440 "data_size": 65536 00:10:56.440 }, 00:10:56.440 { 00:10:56.440 "name": "BaseBdev3", 00:10:56.440 "uuid": "f84329cf-3fa9-4452-96d7-5a112fed501d", 00:10:56.440 "is_configured": true, 00:10:56.440 "data_offset": 0, 00:10:56.440 "data_size": 65536 00:10:56.440 }, 00:10:56.440 { 00:10:56.440 "name": "BaseBdev4", 00:10:56.440 "uuid": "e9ab7086-ceab-4719-a94f-72e1f6c433ee", 00:10:56.440 "is_configured": true, 00:10:56.440 "data_offset": 0, 00:10:56.440 "data_size": 65536 00:10:56.440 } 00:10:56.440 ] 00:10:56.440 } 00:10:56.440 } 00:10:56.440 }' 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:56.440 BaseBdev2 
00:10:56.440 BaseBdev3 00:10:56.440 BaseBdev4' 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.440 21:42:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.440 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.700 21:42:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.700 [2024-09-29 21:42:15.463942] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:56.700 [2024-09-29 21:42:15.464018] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.700 [2024-09-29 21:42:15.464102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.700 "name": "Existed_Raid", 00:10:56.700 "uuid": "98871fc5-184a-45ef-b0eb-3423773cec29", 00:10:56.700 "strip_size_kb": 64, 00:10:56.700 "state": "offline", 00:10:56.700 "raid_level": "concat", 00:10:56.700 "superblock": false, 00:10:56.700 "num_base_bdevs": 4, 00:10:56.700 "num_base_bdevs_discovered": 3, 00:10:56.700 "num_base_bdevs_operational": 3, 00:10:56.700 "base_bdevs_list": [ 00:10:56.700 { 00:10:56.700 "name": null, 00:10:56.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.700 "is_configured": false, 00:10:56.700 "data_offset": 0, 00:10:56.700 "data_size": 65536 00:10:56.700 }, 00:10:56.700 { 00:10:56.700 "name": "BaseBdev2", 00:10:56.700 "uuid": "101adca2-d907-4483-967a-7ce032c86ad1", 00:10:56.700 "is_configured": 
true, 00:10:56.700 "data_offset": 0, 00:10:56.700 "data_size": 65536 00:10:56.700 }, 00:10:56.700 { 00:10:56.700 "name": "BaseBdev3", 00:10:56.700 "uuid": "f84329cf-3fa9-4452-96d7-5a112fed501d", 00:10:56.700 "is_configured": true, 00:10:56.700 "data_offset": 0, 00:10:56.700 "data_size": 65536 00:10:56.700 }, 00:10:56.700 { 00:10:56.700 "name": "BaseBdev4", 00:10:56.700 "uuid": "e9ab7086-ceab-4719-a94f-72e1f6c433ee", 00:10:56.700 "is_configured": true, 00:10:56.700 "data_offset": 0, 00:10:56.700 "data_size": 65536 00:10:56.700 } 00:10:56.700 ] 00:10:56.700 }' 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.700 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.269 [2024-09-29 21:42:16.082709] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.269 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.270 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.270 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.270 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.270 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.270 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:57.270 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.270 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.270 [2024-09-29 21:42:16.242816] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.529 21:42:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.529 [2024-09-29 21:42:16.401427] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:57.529 [2024-09-29 21:42:16.401601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:57.529 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.790 BaseBdev2 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.790 [ 00:10:57.790 { 00:10:57.790 "name": "BaseBdev2", 00:10:57.790 "aliases": [ 00:10:57.790 "0e74d3c1-3a4a-4cbf-b705-f0bf1169c757" 00:10:57.790 ], 00:10:57.790 "product_name": "Malloc disk", 00:10:57.790 "block_size": 512, 00:10:57.790 "num_blocks": 65536, 00:10:57.790 "uuid": "0e74d3c1-3a4a-4cbf-b705-f0bf1169c757", 00:10:57.790 "assigned_rate_limits": { 00:10:57.790 "rw_ios_per_sec": 0, 00:10:57.790 "rw_mbytes_per_sec": 0, 00:10:57.790 "r_mbytes_per_sec": 0, 00:10:57.790 "w_mbytes_per_sec": 0 00:10:57.790 }, 00:10:57.790 "claimed": false, 00:10:57.790 "zoned": false, 00:10:57.790 "supported_io_types": { 00:10:57.790 "read": true, 00:10:57.790 "write": true, 00:10:57.790 "unmap": true, 00:10:57.790 "flush": true, 00:10:57.790 "reset": true, 00:10:57.790 "nvme_admin": false, 00:10:57.790 "nvme_io": false, 00:10:57.790 "nvme_io_md": false, 00:10:57.790 "write_zeroes": true, 00:10:57.790 "zcopy": true, 00:10:57.790 "get_zone_info": false, 00:10:57.790 "zone_management": false, 00:10:57.790 "zone_append": false, 00:10:57.790 "compare": false, 00:10:57.790 "compare_and_write": false, 00:10:57.790 "abort": true, 00:10:57.790 "seek_hole": false, 00:10:57.790 
"seek_data": false, 00:10:57.790 "copy": true, 00:10:57.790 "nvme_iov_md": false 00:10:57.790 }, 00:10:57.790 "memory_domains": [ 00:10:57.790 { 00:10:57.790 "dma_device_id": "system", 00:10:57.790 "dma_device_type": 1 00:10:57.790 }, 00:10:57.790 { 00:10:57.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.790 "dma_device_type": 2 00:10:57.790 } 00:10:57.790 ], 00:10:57.790 "driver_specific": {} 00:10:57.790 } 00:10:57.790 ] 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.790 BaseBdev3 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.790 [ 00:10:57.790 { 00:10:57.790 "name": "BaseBdev3", 00:10:57.790 "aliases": [ 00:10:57.790 "7a371522-78ac-47c5-b211-fb1a30f82cc6" 00:10:57.790 ], 00:10:57.790 "product_name": "Malloc disk", 00:10:57.790 "block_size": 512, 00:10:57.790 "num_blocks": 65536, 00:10:57.790 "uuid": "7a371522-78ac-47c5-b211-fb1a30f82cc6", 00:10:57.790 "assigned_rate_limits": { 00:10:57.790 "rw_ios_per_sec": 0, 00:10:57.790 "rw_mbytes_per_sec": 0, 00:10:57.790 "r_mbytes_per_sec": 0, 00:10:57.790 "w_mbytes_per_sec": 0 00:10:57.790 }, 00:10:57.790 "claimed": false, 00:10:57.790 "zoned": false, 00:10:57.790 "supported_io_types": { 00:10:57.790 "read": true, 00:10:57.790 "write": true, 00:10:57.790 "unmap": true, 00:10:57.790 "flush": true, 00:10:57.790 "reset": true, 00:10:57.790 "nvme_admin": false, 00:10:57.790 "nvme_io": false, 00:10:57.790 "nvme_io_md": false, 00:10:57.790 "write_zeroes": true, 00:10:57.790 "zcopy": true, 00:10:57.790 "get_zone_info": false, 00:10:57.790 "zone_management": false, 00:10:57.790 "zone_append": false, 00:10:57.790 "compare": false, 00:10:57.790 "compare_and_write": false, 00:10:57.790 "abort": true, 00:10:57.790 "seek_hole": false, 00:10:57.790 "seek_data": false, 
00:10:57.790 "copy": true,
00:10:57.790 "nvme_iov_md": false
00:10:57.790 },
00:10:57.790 "memory_domains": [
00:10:57.790 {
00:10:57.790 "dma_device_id": "system",
00:10:57.790 "dma_device_type": 1
00:10:57.790 },
00:10:57.790 {
00:10:57.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:57.790 "dma_device_type": 2
00:10:57.790 }
00:10:57.790 ],
00:10:57.790 "driver_specific": {}
00:10:57.790 }
00:10:57.790 ]
00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:57.790 BaseBdev4
00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:10:57.790 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:58.050 [
00:10:58.050 {
00:10:58.050 "name": "BaseBdev4",
00:10:58.050 "aliases": [
00:10:58.050 "f423be39-6282-4220-a68d-52d0e6ad5d8a"
00:10:58.050 ],
00:10:58.050 "product_name": "Malloc disk",
00:10:58.050 "block_size": 512,
00:10:58.050 "num_blocks": 65536,
00:10:58.050 "uuid": "f423be39-6282-4220-a68d-52d0e6ad5d8a",
00:10:58.050 "assigned_rate_limits": {
00:10:58.050 "rw_ios_per_sec": 0,
00:10:58.050 "rw_mbytes_per_sec": 0,
00:10:58.050 "r_mbytes_per_sec": 0,
00:10:58.050 "w_mbytes_per_sec": 0
00:10:58.050 },
00:10:58.050 "claimed": false,
00:10:58.050 "zoned": false,
00:10:58.050 "supported_io_types": {
00:10:58.050 "read": true,
00:10:58.050 "write": true,
00:10:58.050 "unmap": true,
00:10:58.050 "flush": true,
00:10:58.050 "reset": true,
00:10:58.050 "nvme_admin": false,
00:10:58.050 "nvme_io": false,
00:10:58.050 "nvme_io_md": false,
00:10:58.050 "write_zeroes": true,
00:10:58.050 "zcopy": true,
00:10:58.050 "get_zone_info": false,
00:10:58.050 "zone_management": false,
00:10:58.050 "zone_append": false,
00:10:58.050 "compare": false,
00:10:58.050 "compare_and_write": false,
00:10:58.050 "abort": true,
00:10:58.050 "seek_hole": false,
00:10:58.050 "seek_data": false,
00:10:58.050 "copy": true,
00:10:58.050 "nvme_iov_md": false
00:10:58.050 },
00:10:58.050 "memory_domains": [
00:10:58.050 {
00:10:58.050 "dma_device_id": "system",
00:10:58.050 "dma_device_type": 1
00:10:58.050 },
00:10:58.050 {
00:10:58.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:58.050 "dma_device_type": 2
00:10:58.050 }
00:10:58.050 ],
00:10:58.050 "driver_specific": {}
00:10:58.050 }
00:10:58.050 ]
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:58.050 [2024-09-29 21:42:16.818659] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:58.050 [2024-09-29 21:42:16.818707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:58.050 [2024-09-29 21:42:16.818728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:58.050 [2024-09-29 21:42:16.820771] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:58.050 [2024-09-29 21:42:16.820828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:58.050 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.051 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:58.051 "name": "Existed_Raid",
00:10:58.051 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:58.051 "strip_size_kb": 64,
00:10:58.051 "state": "configuring",
00:10:58.051 "raid_level": "concat",
00:10:58.051 "superblock": false,
00:10:58.051 "num_base_bdevs": 4,
00:10:58.051 "num_base_bdevs_discovered": 3,
00:10:58.051 "num_base_bdevs_operational": 4,
00:10:58.051 "base_bdevs_list": [
00:10:58.051 {
00:10:58.051 "name": "BaseBdev1",
00:10:58.051 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:58.051 "is_configured": false,
00:10:58.051 "data_offset": 0,
00:10:58.051 "data_size": 0
00:10:58.051 },
00:10:58.051 {
00:10:58.051 "name": "BaseBdev2",
00:10:58.051 "uuid": "0e74d3c1-3a4a-4cbf-b705-f0bf1169c757",
00:10:58.051 "is_configured": true,
00:10:58.051 "data_offset": 0,
00:10:58.051 "data_size": 65536
00:10:58.051 },
00:10:58.051 {
00:10:58.051 "name": "BaseBdev3",
00:10:58.051 "uuid": "7a371522-78ac-47c5-b211-fb1a30f82cc6",
00:10:58.051 "is_configured": true,
00:10:58.051 "data_offset": 0,
00:10:58.051 "data_size": 65536
00:10:58.051 },
00:10:58.051 {
00:10:58.051 "name": "BaseBdev4",
00:10:58.051 "uuid": "f423be39-6282-4220-a68d-52d0e6ad5d8a",
00:10:58.051 "is_configured": true,
00:10:58.051 "data_offset": 0,
00:10:58.051 "data_size": 65536
00:10:58.051 }
00:10:58.051 ]
00:10:58.051 }'
00:10:58.051 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:58.051 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:58.619 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:58.620 [2024-09-29 21:42:17.301806] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:58.620 "name": "Existed_Raid",
00:10:58.620 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:58.620 "strip_size_kb": 64,
00:10:58.620 "state": "configuring",
00:10:58.620 "raid_level": "concat",
00:10:58.620 "superblock": false,
00:10:58.620 "num_base_bdevs": 4,
00:10:58.620 "num_base_bdevs_discovered": 2,
00:10:58.620 "num_base_bdevs_operational": 4,
00:10:58.620 "base_bdevs_list": [
00:10:58.620 {
00:10:58.620 "name": "BaseBdev1",
00:10:58.620 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:58.620 "is_configured": false,
00:10:58.620 "data_offset": 0,
00:10:58.620 "data_size": 0
00:10:58.620 },
00:10:58.620 {
00:10:58.620 "name": null,
00:10:58.620 "uuid": "0e74d3c1-3a4a-4cbf-b705-f0bf1169c757",
00:10:58.620 "is_configured": false,
00:10:58.620 "data_offset": 0,
00:10:58.620 "data_size": 65536
00:10:58.620 },
00:10:58.620 {
00:10:58.620 "name": "BaseBdev3",
00:10:58.620 "uuid": "7a371522-78ac-47c5-b211-fb1a30f82cc6",
00:10:58.620 "is_configured": true,
00:10:58.620 "data_offset": 0,
00:10:58.620 "data_size": 65536
00:10:58.620 },
00:10:58.620 {
00:10:58.620 "name": "BaseBdev4",
00:10:58.620 "uuid": "f423be39-6282-4220-a68d-52d0e6ad5d8a",
00:10:58.620 "is_configured": true,
00:10:58.620 "data_offset": 0,
00:10:58.620 "data_size": 65536
00:10:58.620 }
00:10:58.620 ]
00:10:58.620 }'
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:58.620 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:58.879 [2024-09-29 21:42:17.846289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:58.879 BaseBdev1
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.879 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.139 [
00:10:59.139 {
00:10:59.139 "name": "BaseBdev1",
00:10:59.139 "aliases": [
00:10:59.139 "d073d483-d202-4a55-9ecf-efdf3d34c4b9"
00:10:59.139 ],
00:10:59.139 "product_name": "Malloc disk",
00:10:59.139 "block_size": 512,
00:10:59.139 "num_blocks": 65536,
00:10:59.139 "uuid": "d073d483-d202-4a55-9ecf-efdf3d34c4b9",
00:10:59.139 "assigned_rate_limits": {
00:10:59.139 "rw_ios_per_sec": 0,
00:10:59.139 "rw_mbytes_per_sec": 0,
00:10:59.139 "r_mbytes_per_sec": 0,
00:10:59.139 "w_mbytes_per_sec": 0
00:10:59.139 },
00:10:59.139 "claimed": true,
00:10:59.139 "claim_type": "exclusive_write",
00:10:59.139 "zoned": false,
00:10:59.139 "supported_io_types": {
00:10:59.139 "read": true,
00:10:59.139 "write": true,
00:10:59.139 "unmap": true,
00:10:59.139 "flush": true,
00:10:59.139 "reset": true,
00:10:59.139 "nvme_admin": false,
00:10:59.139 "nvme_io": false,
00:10:59.139 "nvme_io_md": false,
00:10:59.139 "write_zeroes": true,
00:10:59.139 "zcopy": true,
00:10:59.139 "get_zone_info": false,
00:10:59.139 "zone_management": false,
00:10:59.139 "zone_append": false,
00:10:59.139 "compare": false,
00:10:59.139 "compare_and_write": false,
00:10:59.139 "abort": true,
00:10:59.139 "seek_hole": false,
00:10:59.139 "seek_data": false,
00:10:59.139 "copy": true,
00:10:59.139 "nvme_iov_md": false
00:10:59.139 },
00:10:59.139 "memory_domains": [
00:10:59.139 {
00:10:59.139 "dma_device_id": "system",
00:10:59.139 "dma_device_type": 1
00:10:59.139 },
00:10:59.139 {
00:10:59.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:59.139 "dma_device_type": 2
00:10:59.139 }
00:10:59.139 ],
00:10:59.139 "driver_specific": {}
00:10:59.139 }
00:10:59.139 ]
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:59.139 "name": "Existed_Raid",
00:10:59.139 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:59.139 "strip_size_kb": 64,
00:10:59.139 "state": "configuring",
00:10:59.139 "raid_level": "concat",
00:10:59.139 "superblock": false,
00:10:59.139 "num_base_bdevs": 4,
00:10:59.139 "num_base_bdevs_discovered": 3,
00:10:59.139 "num_base_bdevs_operational": 4,
00:10:59.139 "base_bdevs_list": [
00:10:59.139 {
00:10:59.139 "name": "BaseBdev1",
00:10:59.139 "uuid": "d073d483-d202-4a55-9ecf-efdf3d34c4b9",
00:10:59.139 "is_configured": true,
00:10:59.139 "data_offset": 0,
00:10:59.139 "data_size": 65536
00:10:59.139 },
00:10:59.139 {
00:10:59.139 "name": null,
00:10:59.139 "uuid": "0e74d3c1-3a4a-4cbf-b705-f0bf1169c757",
00:10:59.139 "is_configured": false,
00:10:59.139 "data_offset": 0,
00:10:59.139 "data_size": 65536
00:10:59.139 },
00:10:59.139 {
00:10:59.139 "name": "BaseBdev3",
00:10:59.139 "uuid": "7a371522-78ac-47c5-b211-fb1a30f82cc6",
00:10:59.139 "is_configured": true,
00:10:59.139 "data_offset": 0,
00:10:59.139 "data_size": 65536
00:10:59.139 },
00:10:59.139 {
00:10:59.139 "name": "BaseBdev4",
00:10:59.139 "uuid": "f423be39-6282-4220-a68d-52d0e6ad5d8a",
00:10:59.139 "is_configured": true,
00:10:59.139 "data_offset": 0,
00:10:59.139 "data_size": 65536
00:10:59.139 }
00:10:59.139 ]
00:10:59.139 }'
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:59.139 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.399 [2024-09-29 21:42:18.349454] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.399 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.659 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:59.659 "name": "Existed_Raid",
00:10:59.659 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:59.659 "strip_size_kb": 64,
00:10:59.659 "state": "configuring",
00:10:59.659 "raid_level": "concat",
00:10:59.659 "superblock": false,
00:10:59.659 "num_base_bdevs": 4,
00:10:59.659 "num_base_bdevs_discovered": 2,
00:10:59.659 "num_base_bdevs_operational": 4,
00:10:59.659 "base_bdevs_list": [
00:10:59.659 {
00:10:59.659 "name": "BaseBdev1",
00:10:59.659 "uuid": "d073d483-d202-4a55-9ecf-efdf3d34c4b9",
00:10:59.659 "is_configured": true,
00:10:59.659 "data_offset": 0,
00:10:59.659 "data_size": 65536
00:10:59.659 },
00:10:59.659 {
00:10:59.659 "name": null,
00:10:59.659 "uuid": "0e74d3c1-3a4a-4cbf-b705-f0bf1169c757",
00:10:59.659 "is_configured": false,
00:10:59.659 "data_offset": 0,
00:10:59.659 "data_size": 65536
00:10:59.659 },
00:10:59.659 {
00:10:59.659 "name": null,
00:10:59.659 "uuid": "7a371522-78ac-47c5-b211-fb1a30f82cc6",
00:10:59.659 "is_configured": false,
00:10:59.659 "data_offset": 0,
00:10:59.659 "data_size": 65536
00:10:59.659 },
00:10:59.659 {
00:10:59.659 "name": "BaseBdev4",
00:10:59.659 "uuid": "f423be39-6282-4220-a68d-52d0e6ad5d8a",
00:10:59.659 "is_configured": true,
00:10:59.659 "data_offset": 0,
00:10:59.659 "data_size": 65536
00:10:59.659 }
00:10:59.659 ]
00:10:59.659 }'
00:10:59.659 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:59.659 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.919 [2024-09-29 21:42:18.804711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:59.919 "name": "Existed_Raid",
00:10:59.919 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:59.919 "strip_size_kb": 64,
00:10:59.919 "state": "configuring",
00:10:59.919 "raid_level": "concat",
00:10:59.919 "superblock": false,
00:10:59.919 "num_base_bdevs": 4,
00:10:59.919 "num_base_bdevs_discovered": 3,
00:10:59.919 "num_base_bdevs_operational": 4,
00:10:59.919 "base_bdevs_list": [
00:10:59.919 {
00:10:59.919 "name": "BaseBdev1",
00:10:59.919 "uuid": "d073d483-d202-4a55-9ecf-efdf3d34c4b9",
00:10:59.919 "is_configured": true,
00:10:59.919 "data_offset": 0,
00:10:59.919 "data_size": 65536
00:10:59.919 },
00:10:59.919 {
00:10:59.919 "name": null,
00:10:59.919 "uuid": "0e74d3c1-3a4a-4cbf-b705-f0bf1169c757",
00:10:59.919 "is_configured": false,
00:10:59.919 "data_offset": 0,
00:10:59.919 "data_size": 65536
00:10:59.919 },
00:10:59.919 {
00:10:59.919 "name": "BaseBdev3",
00:10:59.919 "uuid": "7a371522-78ac-47c5-b211-fb1a30f82cc6",
00:10:59.919 "is_configured": true,
00:10:59.919 "data_offset": 0,
00:10:59.919 "data_size": 65536
00:10:59.919 },
00:10:59.919 {
00:10:59.919 "name": "BaseBdev4",
00:10:59.919 "uuid": "f423be39-6282-4220-a68d-52d0e6ad5d8a",
00:10:59.919 "is_configured": true,
00:10:59.919 "data_offset": 0,
00:10:59.919 "data_size": 65536
00:10:59.919 }
00:10:59.919 ]
00:10:59.919 }'
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:59.919 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.487 [2024-09-29 21:42:19.284297] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:00.487 "name": "Existed_Raid",
00:11:00.487 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:00.487 "strip_size_kb": 64,
00:11:00.487 "state": "configuring",
00:11:00.487 "raid_level": "concat",
00:11:00.487 "superblock": false,
00:11:00.487 "num_base_bdevs": 4,
00:11:00.487 "num_base_bdevs_discovered": 2,
00:11:00.487 "num_base_bdevs_operational": 4,
00:11:00.487 "base_bdevs_list": [
00:11:00.487 {
00:11:00.487 "name": null,
00:11:00.487 "uuid": "d073d483-d202-4a55-9ecf-efdf3d34c4b9",
00:11:00.487 "is_configured": false,
00:11:00.487 "data_offset": 0,
00:11:00.487 "data_size": 65536
00:11:00.487 },
00:11:00.487 {
00:11:00.487 "name": null,
00:11:00.487 "uuid": "0e74d3c1-3a4a-4cbf-b705-f0bf1169c757",
00:11:00.487 "is_configured": false,
00:11:00.487 "data_offset": 0,
00:11:00.487 "data_size": 65536
00:11:00.487 },
00:11:00.487 {
00:11:00.487 "name": "BaseBdev3",
00:11:00.487 "uuid": "7a371522-78ac-47c5-b211-fb1a30f82cc6",
00:11:00.487 "is_configured": true,
00:11:00.487 "data_offset": 0,
00:11:00.487 "data_size": 65536
00:11:00.487 },
00:11:00.487 {
00:11:00.487 "name": "BaseBdev4",
00:11:00.487 "uuid": "f423be39-6282-4220-a68d-52d0e6ad5d8a",
00:11:00.487 "is_configured": true,
00:11:00.487 "data_offset": 0,
00:11:00.487 "data_size": 65536
00:11:00.487 }
00:11:00.487 ]
00:11:00.487 }'
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:00.487 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 --
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.113 [2024-09-29 21:42:19.831629] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.113 "name": "Existed_Raid", 00:11:01.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.113 "strip_size_kb": 64, 00:11:01.113 "state": "configuring", 00:11:01.113 "raid_level": "concat", 00:11:01.113 "superblock": false, 00:11:01.113 "num_base_bdevs": 4, 00:11:01.113 "num_base_bdevs_discovered": 3, 00:11:01.113 "num_base_bdevs_operational": 4, 00:11:01.113 "base_bdevs_list": [ 00:11:01.113 { 00:11:01.113 "name": null, 00:11:01.113 "uuid": "d073d483-d202-4a55-9ecf-efdf3d34c4b9", 00:11:01.113 "is_configured": false, 00:11:01.113 "data_offset": 0, 00:11:01.113 "data_size": 65536 00:11:01.113 }, 00:11:01.113 { 00:11:01.113 "name": "BaseBdev2", 00:11:01.113 "uuid": "0e74d3c1-3a4a-4cbf-b705-f0bf1169c757", 00:11:01.113 "is_configured": true, 00:11:01.113 "data_offset": 0, 00:11:01.113 "data_size": 65536 00:11:01.113 }, 00:11:01.113 { 00:11:01.113 "name": "BaseBdev3", 00:11:01.113 "uuid": "7a371522-78ac-47c5-b211-fb1a30f82cc6", 00:11:01.113 "is_configured": true, 00:11:01.113 "data_offset": 0, 00:11:01.113 "data_size": 65536 00:11:01.113 }, 00:11:01.113 { 00:11:01.113 "name": "BaseBdev4", 00:11:01.113 "uuid": "f423be39-6282-4220-a68d-52d0e6ad5d8a", 00:11:01.113 "is_configured": true, 00:11:01.113 "data_offset": 0, 00:11:01.113 "data_size": 65536 00:11:01.113 } 00:11:01.113 ] 00:11:01.113 }' 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.113 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.373 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:01.373 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:01.373 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.373 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.373 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.373 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:01.373 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.373 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.373 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.373 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:01.373 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d073d483-d202-4a55-9ecf-efdf3d34c4b9 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.632 [2024-09-29 21:42:20.415831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:01.632 [2024-09-29 21:42:20.415885] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:01.632 [2024-09-29 21:42:20.415893] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:01.632 [2024-09-29 21:42:20.416216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:01.632 [2024-09-29 21:42:20.416399] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:01.632 [2024-09-29 21:42:20.416419] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:01.632 [2024-09-29 21:42:20.416696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.632 NewBaseBdev 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:01.632 [ 00:11:01.632 { 00:11:01.632 "name": "NewBaseBdev", 00:11:01.632 "aliases": [ 00:11:01.632 "d073d483-d202-4a55-9ecf-efdf3d34c4b9" 00:11:01.632 ], 00:11:01.632 "product_name": "Malloc disk", 00:11:01.632 "block_size": 512, 00:11:01.632 "num_blocks": 65536, 00:11:01.632 "uuid": "d073d483-d202-4a55-9ecf-efdf3d34c4b9", 00:11:01.632 "assigned_rate_limits": { 00:11:01.632 "rw_ios_per_sec": 0, 00:11:01.632 "rw_mbytes_per_sec": 0, 00:11:01.632 "r_mbytes_per_sec": 0, 00:11:01.632 "w_mbytes_per_sec": 0 00:11:01.632 }, 00:11:01.632 "claimed": true, 00:11:01.632 "claim_type": "exclusive_write", 00:11:01.632 "zoned": false, 00:11:01.632 "supported_io_types": { 00:11:01.632 "read": true, 00:11:01.632 "write": true, 00:11:01.632 "unmap": true, 00:11:01.632 "flush": true, 00:11:01.632 "reset": true, 00:11:01.632 "nvme_admin": false, 00:11:01.632 "nvme_io": false, 00:11:01.632 "nvme_io_md": false, 00:11:01.632 "write_zeroes": true, 00:11:01.632 "zcopy": true, 00:11:01.632 "get_zone_info": false, 00:11:01.632 "zone_management": false, 00:11:01.632 "zone_append": false, 00:11:01.632 "compare": false, 00:11:01.632 "compare_and_write": false, 00:11:01.632 "abort": true, 00:11:01.632 "seek_hole": false, 00:11:01.632 "seek_data": false, 00:11:01.632 "copy": true, 00:11:01.632 "nvme_iov_md": false 00:11:01.632 }, 00:11:01.632 "memory_domains": [ 00:11:01.632 { 00:11:01.632 "dma_device_id": "system", 00:11:01.632 "dma_device_type": 1 00:11:01.632 }, 00:11:01.632 { 00:11:01.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.632 "dma_device_type": 2 00:11:01.632 } 00:11:01.632 ], 00:11:01.632 "driver_specific": {} 00:11:01.632 } 00:11:01.632 ] 00:11:01.632 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.633 "name": "Existed_Raid", 00:11:01.633 "uuid": "c1be3cae-2b4c-4b98-9104-237f017d2a5d", 00:11:01.633 "strip_size_kb": 64, 00:11:01.633 "state": "online", 00:11:01.633 "raid_level": "concat", 00:11:01.633 "superblock": false, 00:11:01.633 
"num_base_bdevs": 4, 00:11:01.633 "num_base_bdevs_discovered": 4, 00:11:01.633 "num_base_bdevs_operational": 4, 00:11:01.633 "base_bdevs_list": [ 00:11:01.633 { 00:11:01.633 "name": "NewBaseBdev", 00:11:01.633 "uuid": "d073d483-d202-4a55-9ecf-efdf3d34c4b9", 00:11:01.633 "is_configured": true, 00:11:01.633 "data_offset": 0, 00:11:01.633 "data_size": 65536 00:11:01.633 }, 00:11:01.633 { 00:11:01.633 "name": "BaseBdev2", 00:11:01.633 "uuid": "0e74d3c1-3a4a-4cbf-b705-f0bf1169c757", 00:11:01.633 "is_configured": true, 00:11:01.633 "data_offset": 0, 00:11:01.633 "data_size": 65536 00:11:01.633 }, 00:11:01.633 { 00:11:01.633 "name": "BaseBdev3", 00:11:01.633 "uuid": "7a371522-78ac-47c5-b211-fb1a30f82cc6", 00:11:01.633 "is_configured": true, 00:11:01.633 "data_offset": 0, 00:11:01.633 "data_size": 65536 00:11:01.633 }, 00:11:01.633 { 00:11:01.633 "name": "BaseBdev4", 00:11:01.633 "uuid": "f423be39-6282-4220-a68d-52d0e6ad5d8a", 00:11:01.633 "is_configured": true, 00:11:01.633 "data_offset": 0, 00:11:01.633 "data_size": 65536 00:11:01.633 } 00:11:01.633 ] 00:11:01.633 }' 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.633 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.892 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:01.892 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:01.892 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.892 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.892 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.892 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.892 21:42:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:01.892 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.892 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.892 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.892 [2024-09-29 21:42:20.875437] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.151 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.151 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:02.151 "name": "Existed_Raid", 00:11:02.151 "aliases": [ 00:11:02.151 "c1be3cae-2b4c-4b98-9104-237f017d2a5d" 00:11:02.151 ], 00:11:02.151 "product_name": "Raid Volume", 00:11:02.151 "block_size": 512, 00:11:02.151 "num_blocks": 262144, 00:11:02.151 "uuid": "c1be3cae-2b4c-4b98-9104-237f017d2a5d", 00:11:02.151 "assigned_rate_limits": { 00:11:02.151 "rw_ios_per_sec": 0, 00:11:02.151 "rw_mbytes_per_sec": 0, 00:11:02.151 "r_mbytes_per_sec": 0, 00:11:02.151 "w_mbytes_per_sec": 0 00:11:02.151 }, 00:11:02.151 "claimed": false, 00:11:02.151 "zoned": false, 00:11:02.151 "supported_io_types": { 00:11:02.151 "read": true, 00:11:02.151 "write": true, 00:11:02.151 "unmap": true, 00:11:02.151 "flush": true, 00:11:02.151 "reset": true, 00:11:02.151 "nvme_admin": false, 00:11:02.151 "nvme_io": false, 00:11:02.151 "nvme_io_md": false, 00:11:02.151 "write_zeroes": true, 00:11:02.151 "zcopy": false, 00:11:02.151 "get_zone_info": false, 00:11:02.151 "zone_management": false, 00:11:02.151 "zone_append": false, 00:11:02.151 "compare": false, 00:11:02.151 "compare_and_write": false, 00:11:02.151 "abort": false, 00:11:02.151 "seek_hole": false, 00:11:02.151 "seek_data": false, 00:11:02.151 "copy": false, 00:11:02.151 "nvme_iov_md": false 00:11:02.151 }, 
00:11:02.151 "memory_domains": [ 00:11:02.151 { 00:11:02.151 "dma_device_id": "system", 00:11:02.151 "dma_device_type": 1 00:11:02.151 }, 00:11:02.151 { 00:11:02.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.151 "dma_device_type": 2 00:11:02.151 }, 00:11:02.151 { 00:11:02.151 "dma_device_id": "system", 00:11:02.151 "dma_device_type": 1 00:11:02.151 }, 00:11:02.151 { 00:11:02.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.151 "dma_device_type": 2 00:11:02.151 }, 00:11:02.151 { 00:11:02.151 "dma_device_id": "system", 00:11:02.151 "dma_device_type": 1 00:11:02.151 }, 00:11:02.151 { 00:11:02.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.151 "dma_device_type": 2 00:11:02.151 }, 00:11:02.151 { 00:11:02.151 "dma_device_id": "system", 00:11:02.151 "dma_device_type": 1 00:11:02.151 }, 00:11:02.151 { 00:11:02.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.151 "dma_device_type": 2 00:11:02.151 } 00:11:02.151 ], 00:11:02.151 "driver_specific": { 00:11:02.151 "raid": { 00:11:02.151 "uuid": "c1be3cae-2b4c-4b98-9104-237f017d2a5d", 00:11:02.151 "strip_size_kb": 64, 00:11:02.151 "state": "online", 00:11:02.151 "raid_level": "concat", 00:11:02.151 "superblock": false, 00:11:02.151 "num_base_bdevs": 4, 00:11:02.151 "num_base_bdevs_discovered": 4, 00:11:02.151 "num_base_bdevs_operational": 4, 00:11:02.151 "base_bdevs_list": [ 00:11:02.151 { 00:11:02.151 "name": "NewBaseBdev", 00:11:02.151 "uuid": "d073d483-d202-4a55-9ecf-efdf3d34c4b9", 00:11:02.151 "is_configured": true, 00:11:02.151 "data_offset": 0, 00:11:02.151 "data_size": 65536 00:11:02.151 }, 00:11:02.151 { 00:11:02.151 "name": "BaseBdev2", 00:11:02.151 "uuid": "0e74d3c1-3a4a-4cbf-b705-f0bf1169c757", 00:11:02.151 "is_configured": true, 00:11:02.151 "data_offset": 0, 00:11:02.151 "data_size": 65536 00:11:02.151 }, 00:11:02.151 { 00:11:02.151 "name": "BaseBdev3", 00:11:02.151 "uuid": "7a371522-78ac-47c5-b211-fb1a30f82cc6", 00:11:02.151 "is_configured": true, 00:11:02.151 "data_offset": 0, 
00:11:02.151 "data_size": 65536 00:11:02.151 }, 00:11:02.151 { 00:11:02.151 "name": "BaseBdev4", 00:11:02.151 "uuid": "f423be39-6282-4220-a68d-52d0e6ad5d8a", 00:11:02.151 "is_configured": true, 00:11:02.151 "data_offset": 0, 00:11:02.151 "data_size": 65536 00:11:02.151 } 00:11:02.151 ] 00:11:02.151 } 00:11:02.151 } 00:11:02.151 }' 00:11:02.151 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:02.152 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:02.152 BaseBdev2 00:11:02.152 BaseBdev3 00:11:02.152 BaseBdev4' 00:11:02.152 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.152 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:02.152 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.152 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:02.152 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.152 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.152 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:02.152 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.411 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.411 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:02.411 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:02.411 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.411 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.411 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.411 [2024-09-29 21:42:21.178587] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.411 [2024-09-29 21:42:21.178618] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.411 [2024-09-29 21:42:21.178689] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.412 [2024-09-29 21:42:21.178766] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.412 [2024-09-29 21:42:21.178781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:02.412 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.412 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71363 00:11:02.412 21:42:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71363 ']' 00:11:02.412 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71363 00:11:02.412 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:02.412 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:02.412 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71363 00:11:02.412 killing process with pid 71363 00:11:02.412 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:02.412 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:02.412 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71363' 00:11:02.412 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71363 00:11:02.412 [2024-09-29 21:42:21.229326] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.412 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71363 00:11:02.670 [2024-09-29 21:42:21.642664] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:04.049 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:04.049 00:11:04.049 real 0m11.900s 00:11:04.049 user 0m18.595s 00:11:04.049 sys 0m2.189s 00:11:04.049 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:04.049 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.049 ************************************ 00:11:04.049 END TEST raid_state_function_test 00:11:04.049 ************************************ 00:11:04.049 21:42:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:04.049 21:42:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:04.049 21:42:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.049 21:42:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.309 ************************************ 00:11:04.309 START TEST raid_state_function_test_sb 00:11:04.309 ************************************ 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72036 00:11:04.309 21:42:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72036' 00:11:04.309 Process raid pid: 72036 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72036 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72036 ']' 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.309 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.309 [2024-09-29 21:42:23.141702] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:04.309 [2024-09-29 21:42:23.141843] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.568 [2024-09-29 21:42:23.308137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.828 [2024-09-29 21:42:23.554862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.828 [2024-09-29 21:42:23.781999] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.828 [2024-09-29 21:42:23.782047] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.086 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.087 [2024-09-29 21:42:23.965469] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.087 [2024-09-29 21:42:23.965541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.087 [2024-09-29 21:42:23.965550] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.087 [2024-09-29 21:42:23.965560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.087 [2024-09-29 21:42:23.965566] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:05.087 [2024-09-29 21:42:23.965574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.087 [2024-09-29 21:42:23.965579] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:05.087 [2024-09-29 21:42:23.965590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.087 21:42:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.087 21:42:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.087 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.087 "name": "Existed_Raid", 00:11:05.087 "uuid": "22cbcd94-c0c0-4254-9d6a-5ff1c454c387", 00:11:05.087 "strip_size_kb": 64, 00:11:05.087 "state": "configuring", 00:11:05.087 "raid_level": "concat", 00:11:05.087 "superblock": true, 00:11:05.087 "num_base_bdevs": 4, 00:11:05.087 "num_base_bdevs_discovered": 0, 00:11:05.087 "num_base_bdevs_operational": 4, 00:11:05.087 "base_bdevs_list": [ 00:11:05.087 { 00:11:05.087 "name": "BaseBdev1", 00:11:05.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.087 "is_configured": false, 00:11:05.087 "data_offset": 0, 00:11:05.087 "data_size": 0 00:11:05.087 }, 00:11:05.087 { 00:11:05.087 "name": "BaseBdev2", 00:11:05.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.087 "is_configured": false, 00:11:05.087 "data_offset": 0, 00:11:05.087 "data_size": 0 00:11:05.087 }, 00:11:05.087 { 00:11:05.087 "name": "BaseBdev3", 00:11:05.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.087 "is_configured": false, 00:11:05.087 "data_offset": 0, 00:11:05.087 "data_size": 0 00:11:05.087 }, 00:11:05.087 { 00:11:05.087 "name": "BaseBdev4", 00:11:05.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.087 "is_configured": false, 00:11:05.087 "data_offset": 0, 00:11:05.087 "data_size": 0 00:11:05.087 } 00:11:05.087 ] 00:11:05.087 }' 00:11:05.087 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.087 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.655 21:42:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:05.655 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.655 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.655 [2024-09-29 21:42:24.456521] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:05.655 [2024-09-29 21:42:24.456565] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:05.655 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.656 [2024-09-29 21:42:24.468544] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.656 [2024-09-29 21:42:24.468586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.656 [2024-09-29 21:42:24.468594] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:05.656 [2024-09-29 21:42:24.468604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:05.656 [2024-09-29 21:42:24.468610] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:05.656 [2024-09-29 21:42:24.468619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:05.656 [2024-09-29 21:42:24.468625] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:05.656 [2024-09-29 21:42:24.468634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.656 BaseBdev1 00:11:05.656 [2024-09-29 21:42:24.541229] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.656 [ 00:11:05.656 { 00:11:05.656 "name": "BaseBdev1", 00:11:05.656 "aliases": [ 00:11:05.656 "8e687a98-29a1-401e-b811-6333c2fb1436" 00:11:05.656 ], 00:11:05.656 "product_name": "Malloc disk", 00:11:05.656 "block_size": 512, 00:11:05.656 "num_blocks": 65536, 00:11:05.656 "uuid": "8e687a98-29a1-401e-b811-6333c2fb1436", 00:11:05.656 "assigned_rate_limits": { 00:11:05.656 "rw_ios_per_sec": 0, 00:11:05.656 "rw_mbytes_per_sec": 0, 00:11:05.656 "r_mbytes_per_sec": 0, 00:11:05.656 "w_mbytes_per_sec": 0 00:11:05.656 }, 00:11:05.656 "claimed": true, 00:11:05.656 "claim_type": "exclusive_write", 00:11:05.656 "zoned": false, 00:11:05.656 "supported_io_types": { 00:11:05.656 "read": true, 00:11:05.656 "write": true, 00:11:05.656 "unmap": true, 00:11:05.656 "flush": true, 00:11:05.656 "reset": true, 00:11:05.656 "nvme_admin": false, 00:11:05.656 "nvme_io": false, 00:11:05.656 "nvme_io_md": false, 00:11:05.656 "write_zeroes": true, 00:11:05.656 "zcopy": true, 00:11:05.656 "get_zone_info": false, 00:11:05.656 "zone_management": false, 00:11:05.656 "zone_append": false, 00:11:05.656 "compare": false, 00:11:05.656 "compare_and_write": false, 00:11:05.656 "abort": true, 00:11:05.656 "seek_hole": false, 00:11:05.656 "seek_data": false, 00:11:05.656 "copy": true, 00:11:05.656 "nvme_iov_md": false 00:11:05.656 }, 00:11:05.656 "memory_domains": [ 00:11:05.656 { 00:11:05.656 "dma_device_id": "system", 00:11:05.656 "dma_device_type": 1 00:11:05.656 }, 00:11:05.656 { 00:11:05.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.656 "dma_device_type": 2 00:11:05.656 } 
00:11:05.656 ], 00:11:05.656 "driver_specific": {} 00:11:05.656 } 00:11:05.656 ] 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.656 21:42:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.656 "name": "Existed_Raid", 00:11:05.656 "uuid": "c4f86283-47e5-47d4-8d0f-b6f5f24c2a8c", 00:11:05.656 "strip_size_kb": 64, 00:11:05.656 "state": "configuring", 00:11:05.656 "raid_level": "concat", 00:11:05.656 "superblock": true, 00:11:05.656 "num_base_bdevs": 4, 00:11:05.656 "num_base_bdevs_discovered": 1, 00:11:05.656 "num_base_bdevs_operational": 4, 00:11:05.656 "base_bdevs_list": [ 00:11:05.656 { 00:11:05.656 "name": "BaseBdev1", 00:11:05.656 "uuid": "8e687a98-29a1-401e-b811-6333c2fb1436", 00:11:05.656 "is_configured": true, 00:11:05.656 "data_offset": 2048, 00:11:05.656 "data_size": 63488 00:11:05.656 }, 00:11:05.656 { 00:11:05.656 "name": "BaseBdev2", 00:11:05.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.656 "is_configured": false, 00:11:05.656 "data_offset": 0, 00:11:05.656 "data_size": 0 00:11:05.656 }, 00:11:05.656 { 00:11:05.656 "name": "BaseBdev3", 00:11:05.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.656 "is_configured": false, 00:11:05.656 "data_offset": 0, 00:11:05.656 "data_size": 0 00:11:05.656 }, 00:11:05.656 { 00:11:05.656 "name": "BaseBdev4", 00:11:05.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.656 "is_configured": false, 00:11:05.656 "data_offset": 0, 00:11:05.656 "data_size": 0 00:11:05.656 } 00:11:05.656 ] 00:11:05.656 }' 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.656 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.225 21:42:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.225 [2024-09-29 21:42:25.024393] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.225 [2024-09-29 21:42:25.024439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.225 [2024-09-29 21:42:25.036439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.225 [2024-09-29 21:42:25.038486] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:06.225 [2024-09-29 21:42:25.038529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:06.225 [2024-09-29 21:42:25.038538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:06.225 [2024-09-29 21:42:25.038548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:06.225 [2024-09-29 21:42:25.038554] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:06.225 [2024-09-29 21:42:25.038562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:06.225 "name": "Existed_Raid", 00:11:06.225 "uuid": "585cbd02-2ea9-475a-a007-e2678df226d2", 00:11:06.225 "strip_size_kb": 64, 00:11:06.225 "state": "configuring", 00:11:06.225 "raid_level": "concat", 00:11:06.225 "superblock": true, 00:11:06.225 "num_base_bdevs": 4, 00:11:06.225 "num_base_bdevs_discovered": 1, 00:11:06.225 "num_base_bdevs_operational": 4, 00:11:06.225 "base_bdevs_list": [ 00:11:06.225 { 00:11:06.225 "name": "BaseBdev1", 00:11:06.225 "uuid": "8e687a98-29a1-401e-b811-6333c2fb1436", 00:11:06.225 "is_configured": true, 00:11:06.225 "data_offset": 2048, 00:11:06.225 "data_size": 63488 00:11:06.225 }, 00:11:06.225 { 00:11:06.225 "name": "BaseBdev2", 00:11:06.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.225 "is_configured": false, 00:11:06.225 "data_offset": 0, 00:11:06.225 "data_size": 0 00:11:06.225 }, 00:11:06.225 { 00:11:06.225 "name": "BaseBdev3", 00:11:06.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.225 "is_configured": false, 00:11:06.225 "data_offset": 0, 00:11:06.225 "data_size": 0 00:11:06.225 }, 00:11:06.225 { 00:11:06.225 "name": "BaseBdev4", 00:11:06.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.225 "is_configured": false, 00:11:06.225 "data_offset": 0, 00:11:06.225 "data_size": 0 00:11:06.225 } 00:11:06.225 ] 00:11:06.225 }' 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.225 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.794 [2024-09-29 21:42:25.544323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:06.794 BaseBdev2 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.794 [ 00:11:06.794 { 00:11:06.794 "name": "BaseBdev2", 00:11:06.794 "aliases": [ 00:11:06.794 "58849c43-e6b9-4383-9d49-b4997591eac0" 00:11:06.794 ], 00:11:06.794 "product_name": "Malloc disk", 00:11:06.794 "block_size": 512, 00:11:06.794 "num_blocks": 65536, 00:11:06.794 "uuid": "58849c43-e6b9-4383-9d49-b4997591eac0", 
00:11:06.794 "assigned_rate_limits": { 00:11:06.794 "rw_ios_per_sec": 0, 00:11:06.794 "rw_mbytes_per_sec": 0, 00:11:06.794 "r_mbytes_per_sec": 0, 00:11:06.794 "w_mbytes_per_sec": 0 00:11:06.794 }, 00:11:06.794 "claimed": true, 00:11:06.794 "claim_type": "exclusive_write", 00:11:06.794 "zoned": false, 00:11:06.794 "supported_io_types": { 00:11:06.794 "read": true, 00:11:06.794 "write": true, 00:11:06.794 "unmap": true, 00:11:06.794 "flush": true, 00:11:06.794 "reset": true, 00:11:06.794 "nvme_admin": false, 00:11:06.794 "nvme_io": false, 00:11:06.794 "nvme_io_md": false, 00:11:06.794 "write_zeroes": true, 00:11:06.794 "zcopy": true, 00:11:06.794 "get_zone_info": false, 00:11:06.794 "zone_management": false, 00:11:06.794 "zone_append": false, 00:11:06.794 "compare": false, 00:11:06.794 "compare_and_write": false, 00:11:06.794 "abort": true, 00:11:06.794 "seek_hole": false, 00:11:06.794 "seek_data": false, 00:11:06.794 "copy": true, 00:11:06.794 "nvme_iov_md": false 00:11:06.794 }, 00:11:06.794 "memory_domains": [ 00:11:06.794 { 00:11:06.794 "dma_device_id": "system", 00:11:06.794 "dma_device_type": 1 00:11:06.794 }, 00:11:06.794 { 00:11:06.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.794 "dma_device_type": 2 00:11:06.794 } 00:11:06.794 ], 00:11:06.794 "driver_specific": {} 00:11:06.794 } 00:11:06.794 ] 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.794 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.795 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.795 "name": "Existed_Raid", 00:11:06.795 "uuid": "585cbd02-2ea9-475a-a007-e2678df226d2", 00:11:06.795 "strip_size_kb": 64, 00:11:06.795 "state": "configuring", 00:11:06.795 "raid_level": "concat", 00:11:06.795 "superblock": true, 00:11:06.795 "num_base_bdevs": 4, 00:11:06.795 "num_base_bdevs_discovered": 2, 00:11:06.795 
"num_base_bdevs_operational": 4, 00:11:06.795 "base_bdevs_list": [ 00:11:06.795 { 00:11:06.795 "name": "BaseBdev1", 00:11:06.795 "uuid": "8e687a98-29a1-401e-b811-6333c2fb1436", 00:11:06.795 "is_configured": true, 00:11:06.795 "data_offset": 2048, 00:11:06.795 "data_size": 63488 00:11:06.795 }, 00:11:06.795 { 00:11:06.795 "name": "BaseBdev2", 00:11:06.795 "uuid": "58849c43-e6b9-4383-9d49-b4997591eac0", 00:11:06.795 "is_configured": true, 00:11:06.795 "data_offset": 2048, 00:11:06.795 "data_size": 63488 00:11:06.795 }, 00:11:06.795 { 00:11:06.795 "name": "BaseBdev3", 00:11:06.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.795 "is_configured": false, 00:11:06.795 "data_offset": 0, 00:11:06.795 "data_size": 0 00:11:06.795 }, 00:11:06.795 { 00:11:06.795 "name": "BaseBdev4", 00:11:06.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.795 "is_configured": false, 00:11:06.795 "data_offset": 0, 00:11:06.795 "data_size": 0 00:11:06.795 } 00:11:06.795 ] 00:11:06.795 }' 00:11:06.795 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.795 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.054 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:07.054 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.054 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.313 [2024-09-29 21:42:26.047703] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.313 BaseBdev3 00:11:07.313 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.313 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:07.313 21:42:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:07.313 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:07.313 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:07.313 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:07.313 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:07.313 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:07.313 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.313 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.313 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.313 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.314 [ 00:11:07.314 { 00:11:07.314 "name": "BaseBdev3", 00:11:07.314 "aliases": [ 00:11:07.314 "32b87989-16d3-4238-bcd9-200c9b6e93cb" 00:11:07.314 ], 00:11:07.314 "product_name": "Malloc disk", 00:11:07.314 "block_size": 512, 00:11:07.314 "num_blocks": 65536, 00:11:07.314 "uuid": "32b87989-16d3-4238-bcd9-200c9b6e93cb", 00:11:07.314 "assigned_rate_limits": { 00:11:07.314 "rw_ios_per_sec": 0, 00:11:07.314 "rw_mbytes_per_sec": 0, 00:11:07.314 "r_mbytes_per_sec": 0, 00:11:07.314 "w_mbytes_per_sec": 0 00:11:07.314 }, 00:11:07.314 "claimed": true, 00:11:07.314 "claim_type": "exclusive_write", 00:11:07.314 "zoned": false, 00:11:07.314 "supported_io_types": { 
00:11:07.314 "read": true, 00:11:07.314 "write": true, 00:11:07.314 "unmap": true, 00:11:07.314 "flush": true, 00:11:07.314 "reset": true, 00:11:07.314 "nvme_admin": false, 00:11:07.314 "nvme_io": false, 00:11:07.314 "nvme_io_md": false, 00:11:07.314 "write_zeroes": true, 00:11:07.314 "zcopy": true, 00:11:07.314 "get_zone_info": false, 00:11:07.314 "zone_management": false, 00:11:07.314 "zone_append": false, 00:11:07.314 "compare": false, 00:11:07.314 "compare_and_write": false, 00:11:07.314 "abort": true, 00:11:07.314 "seek_hole": false, 00:11:07.314 "seek_data": false, 00:11:07.314 "copy": true, 00:11:07.314 "nvme_iov_md": false 00:11:07.314 }, 00:11:07.314 "memory_domains": [ 00:11:07.314 { 00:11:07.314 "dma_device_id": "system", 00:11:07.314 "dma_device_type": 1 00:11:07.314 }, 00:11:07.314 { 00:11:07.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.314 "dma_device_type": 2 00:11:07.314 } 00:11:07.314 ], 00:11:07.314 "driver_specific": {} 00:11:07.314 } 00:11:07.314 ] 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.314 "name": "Existed_Raid", 00:11:07.314 "uuid": "585cbd02-2ea9-475a-a007-e2678df226d2", 00:11:07.314 "strip_size_kb": 64, 00:11:07.314 "state": "configuring", 00:11:07.314 "raid_level": "concat", 00:11:07.314 "superblock": true, 00:11:07.314 "num_base_bdevs": 4, 00:11:07.314 "num_base_bdevs_discovered": 3, 00:11:07.314 "num_base_bdevs_operational": 4, 00:11:07.314 "base_bdevs_list": [ 00:11:07.314 { 00:11:07.314 "name": "BaseBdev1", 00:11:07.314 "uuid": "8e687a98-29a1-401e-b811-6333c2fb1436", 00:11:07.314 "is_configured": true, 00:11:07.314 "data_offset": 2048, 00:11:07.314 "data_size": 63488 00:11:07.314 }, 00:11:07.314 { 00:11:07.314 "name": "BaseBdev2", 00:11:07.314 
"uuid": "58849c43-e6b9-4383-9d49-b4997591eac0", 00:11:07.314 "is_configured": true, 00:11:07.314 "data_offset": 2048, 00:11:07.314 "data_size": 63488 00:11:07.314 }, 00:11:07.314 { 00:11:07.314 "name": "BaseBdev3", 00:11:07.314 "uuid": "32b87989-16d3-4238-bcd9-200c9b6e93cb", 00:11:07.314 "is_configured": true, 00:11:07.314 "data_offset": 2048, 00:11:07.314 "data_size": 63488 00:11:07.314 }, 00:11:07.314 { 00:11:07.314 "name": "BaseBdev4", 00:11:07.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.314 "is_configured": false, 00:11:07.314 "data_offset": 0, 00:11:07.314 "data_size": 0 00:11:07.314 } 00:11:07.314 ] 00:11:07.314 }' 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.314 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.573 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:07.573 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.573 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.833 [2024-09-29 21:42:26.596902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:07.833 [2024-09-29 21:42:26.597220] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:07.833 [2024-09-29 21:42:26.597245] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:07.833 [2024-09-29 21:42:26.597547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:07.833 [2024-09-29 21:42:26.597716] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:07.833 [2024-09-29 21:42:26.597737] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:07.833 [2024-09-29 21:42:26.597884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.833 BaseBdev4 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.833 [ 00:11:07.833 { 00:11:07.833 "name": "BaseBdev4", 00:11:07.833 "aliases": [ 00:11:07.833 "b86c7725-88f5-4f05-a7ee-e5b4c9dff9e1" 00:11:07.833 ], 00:11:07.833 "product_name": "Malloc disk", 00:11:07.833 "block_size": 512, 
00:11:07.833 "num_blocks": 65536, 00:11:07.833 "uuid": "b86c7725-88f5-4f05-a7ee-e5b4c9dff9e1", 00:11:07.833 "assigned_rate_limits": { 00:11:07.833 "rw_ios_per_sec": 0, 00:11:07.833 "rw_mbytes_per_sec": 0, 00:11:07.833 "r_mbytes_per_sec": 0, 00:11:07.833 "w_mbytes_per_sec": 0 00:11:07.833 }, 00:11:07.833 "claimed": true, 00:11:07.833 "claim_type": "exclusive_write", 00:11:07.833 "zoned": false, 00:11:07.833 "supported_io_types": { 00:11:07.833 "read": true, 00:11:07.833 "write": true, 00:11:07.833 "unmap": true, 00:11:07.833 "flush": true, 00:11:07.833 "reset": true, 00:11:07.833 "nvme_admin": false, 00:11:07.833 "nvme_io": false, 00:11:07.833 "nvme_io_md": false, 00:11:07.833 "write_zeroes": true, 00:11:07.833 "zcopy": true, 00:11:07.833 "get_zone_info": false, 00:11:07.833 "zone_management": false, 00:11:07.833 "zone_append": false, 00:11:07.833 "compare": false, 00:11:07.833 "compare_and_write": false, 00:11:07.833 "abort": true, 00:11:07.833 "seek_hole": false, 00:11:07.833 "seek_data": false, 00:11:07.833 "copy": true, 00:11:07.833 "nvme_iov_md": false 00:11:07.833 }, 00:11:07.833 "memory_domains": [ 00:11:07.833 { 00:11:07.833 "dma_device_id": "system", 00:11:07.833 "dma_device_type": 1 00:11:07.833 }, 00:11:07.833 { 00:11:07.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.833 "dma_device_type": 2 00:11:07.833 } 00:11:07.833 ], 00:11:07.833 "driver_specific": {} 00:11:07.833 } 00:11:07.833 ] 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.833 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.834 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.834 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.834 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.834 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.834 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.834 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.834 "name": "Existed_Raid", 00:11:07.834 "uuid": "585cbd02-2ea9-475a-a007-e2678df226d2", 00:11:07.834 "strip_size_kb": 64, 00:11:07.834 "state": "online", 00:11:07.834 "raid_level": "concat", 00:11:07.834 "superblock": true, 00:11:07.834 "num_base_bdevs": 
4, 00:11:07.834 "num_base_bdevs_discovered": 4, 00:11:07.834 "num_base_bdevs_operational": 4, 00:11:07.834 "base_bdevs_list": [ 00:11:07.834 { 00:11:07.834 "name": "BaseBdev1", 00:11:07.834 "uuid": "8e687a98-29a1-401e-b811-6333c2fb1436", 00:11:07.834 "is_configured": true, 00:11:07.834 "data_offset": 2048, 00:11:07.834 "data_size": 63488 00:11:07.834 }, 00:11:07.834 { 00:11:07.834 "name": "BaseBdev2", 00:11:07.834 "uuid": "58849c43-e6b9-4383-9d49-b4997591eac0", 00:11:07.834 "is_configured": true, 00:11:07.834 "data_offset": 2048, 00:11:07.834 "data_size": 63488 00:11:07.834 }, 00:11:07.834 { 00:11:07.834 "name": "BaseBdev3", 00:11:07.834 "uuid": "32b87989-16d3-4238-bcd9-200c9b6e93cb", 00:11:07.834 "is_configured": true, 00:11:07.834 "data_offset": 2048, 00:11:07.834 "data_size": 63488 00:11:07.834 }, 00:11:07.834 { 00:11:07.834 "name": "BaseBdev4", 00:11:07.834 "uuid": "b86c7725-88f5-4f05-a7ee-e5b4c9dff9e1", 00:11:07.834 "is_configured": true, 00:11:07.834 "data_offset": 2048, 00:11:07.834 "data_size": 63488 00:11:07.834 } 00:11:07.834 ] 00:11:07.834 }' 00:11:07.834 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.834 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.093 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:08.093 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:08.093 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:08.093 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:08.093 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:08.093 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:08.093 
21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:08.093 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.093 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.093 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:08.093 [2024-09-29 21:42:27.068454] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:08.352 "name": "Existed_Raid", 00:11:08.352 "aliases": [ 00:11:08.352 "585cbd02-2ea9-475a-a007-e2678df226d2" 00:11:08.352 ], 00:11:08.352 "product_name": "Raid Volume", 00:11:08.352 "block_size": 512, 00:11:08.352 "num_blocks": 253952, 00:11:08.352 "uuid": "585cbd02-2ea9-475a-a007-e2678df226d2", 00:11:08.352 "assigned_rate_limits": { 00:11:08.352 "rw_ios_per_sec": 0, 00:11:08.352 "rw_mbytes_per_sec": 0, 00:11:08.352 "r_mbytes_per_sec": 0, 00:11:08.352 "w_mbytes_per_sec": 0 00:11:08.352 }, 00:11:08.352 "claimed": false, 00:11:08.352 "zoned": false, 00:11:08.352 "supported_io_types": { 00:11:08.352 "read": true, 00:11:08.352 "write": true, 00:11:08.352 "unmap": true, 00:11:08.352 "flush": true, 00:11:08.352 "reset": true, 00:11:08.352 "nvme_admin": false, 00:11:08.352 "nvme_io": false, 00:11:08.352 "nvme_io_md": false, 00:11:08.352 "write_zeroes": true, 00:11:08.352 "zcopy": false, 00:11:08.352 "get_zone_info": false, 00:11:08.352 "zone_management": false, 00:11:08.352 "zone_append": false, 00:11:08.352 "compare": false, 00:11:08.352 "compare_and_write": false, 00:11:08.352 "abort": false, 00:11:08.352 "seek_hole": false, 00:11:08.352 "seek_data": false, 00:11:08.352 "copy": false, 00:11:08.352 
"nvme_iov_md": false 00:11:08.352 }, 00:11:08.352 "memory_domains": [ 00:11:08.352 { 00:11:08.352 "dma_device_id": "system", 00:11:08.352 "dma_device_type": 1 00:11:08.352 }, 00:11:08.352 { 00:11:08.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.352 "dma_device_type": 2 00:11:08.352 }, 00:11:08.352 { 00:11:08.352 "dma_device_id": "system", 00:11:08.352 "dma_device_type": 1 00:11:08.352 }, 00:11:08.352 { 00:11:08.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.352 "dma_device_type": 2 00:11:08.352 }, 00:11:08.352 { 00:11:08.352 "dma_device_id": "system", 00:11:08.352 "dma_device_type": 1 00:11:08.352 }, 00:11:08.352 { 00:11:08.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.352 "dma_device_type": 2 00:11:08.352 }, 00:11:08.352 { 00:11:08.352 "dma_device_id": "system", 00:11:08.352 "dma_device_type": 1 00:11:08.352 }, 00:11:08.352 { 00:11:08.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.352 "dma_device_type": 2 00:11:08.352 } 00:11:08.352 ], 00:11:08.352 "driver_specific": { 00:11:08.352 "raid": { 00:11:08.352 "uuid": "585cbd02-2ea9-475a-a007-e2678df226d2", 00:11:08.352 "strip_size_kb": 64, 00:11:08.352 "state": "online", 00:11:08.352 "raid_level": "concat", 00:11:08.352 "superblock": true, 00:11:08.352 "num_base_bdevs": 4, 00:11:08.352 "num_base_bdevs_discovered": 4, 00:11:08.352 "num_base_bdevs_operational": 4, 00:11:08.352 "base_bdevs_list": [ 00:11:08.352 { 00:11:08.352 "name": "BaseBdev1", 00:11:08.352 "uuid": "8e687a98-29a1-401e-b811-6333c2fb1436", 00:11:08.352 "is_configured": true, 00:11:08.352 "data_offset": 2048, 00:11:08.352 "data_size": 63488 00:11:08.352 }, 00:11:08.352 { 00:11:08.352 "name": "BaseBdev2", 00:11:08.352 "uuid": "58849c43-e6b9-4383-9d49-b4997591eac0", 00:11:08.352 "is_configured": true, 00:11:08.352 "data_offset": 2048, 00:11:08.352 "data_size": 63488 00:11:08.352 }, 00:11:08.352 { 00:11:08.352 "name": "BaseBdev3", 00:11:08.352 "uuid": "32b87989-16d3-4238-bcd9-200c9b6e93cb", 00:11:08.352 "is_configured": true, 
00:11:08.352 "data_offset": 2048, 00:11:08.352 "data_size": 63488 00:11:08.352 }, 00:11:08.352 { 00:11:08.352 "name": "BaseBdev4", 00:11:08.352 "uuid": "b86c7725-88f5-4f05-a7ee-e5b4c9dff9e1", 00:11:08.352 "is_configured": true, 00:11:08.352 "data_offset": 2048, 00:11:08.352 "data_size": 63488 00:11:08.352 } 00:11:08.352 ] 00:11:08.352 } 00:11:08.352 } 00:11:08.352 }' 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:08.352 BaseBdev2 00:11:08.352 BaseBdev3 00:11:08.352 BaseBdev4' 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.352 21:42:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.352 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.353 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:08.353 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:08.353 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.353 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.353 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.353 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.353 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.353 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.353 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:08.353 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:08.353 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:08.353 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.353 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.612 [2024-09-29 21:42:27.367653] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.612 [2024-09-29 21:42:27.367688] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.612 [2024-09-29 21:42:27.367734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.612 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.613 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.613 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.613 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.613 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.613 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.613 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.613 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.613 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:08.613 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.613 "name": "Existed_Raid", 00:11:08.613 "uuid": "585cbd02-2ea9-475a-a007-e2678df226d2", 00:11:08.613 "strip_size_kb": 64, 00:11:08.613 "state": "offline", 00:11:08.613 "raid_level": "concat", 00:11:08.613 "superblock": true, 00:11:08.613 "num_base_bdevs": 4, 00:11:08.613 "num_base_bdevs_discovered": 3, 00:11:08.613 "num_base_bdevs_operational": 3, 00:11:08.613 "base_bdevs_list": [ 00:11:08.613 { 00:11:08.613 "name": null, 00:11:08.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.613 "is_configured": false, 00:11:08.613 "data_offset": 0, 00:11:08.613 "data_size": 63488 00:11:08.613 }, 00:11:08.613 { 00:11:08.613 "name": "BaseBdev2", 00:11:08.613 "uuid": "58849c43-e6b9-4383-9d49-b4997591eac0", 00:11:08.613 "is_configured": true, 00:11:08.613 "data_offset": 2048, 00:11:08.613 "data_size": 63488 00:11:08.613 }, 00:11:08.613 { 00:11:08.613 "name": "BaseBdev3", 00:11:08.613 "uuid": "32b87989-16d3-4238-bcd9-200c9b6e93cb", 00:11:08.613 "is_configured": true, 00:11:08.613 "data_offset": 2048, 00:11:08.613 "data_size": 63488 00:11:08.613 }, 00:11:08.613 { 00:11:08.613 "name": "BaseBdev4", 00:11:08.613 "uuid": "b86c7725-88f5-4f05-a7ee-e5b4c9dff9e1", 00:11:08.613 "is_configured": true, 00:11:08.613 "data_offset": 2048, 00:11:08.613 "data_size": 63488 00:11:08.613 } 00:11:08.613 ] 00:11:08.613 }' 00:11:08.613 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.613 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.182 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:09.182 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.182 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.182 
21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:09.182 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.182 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.182 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.182 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:09.182 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:09.182 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:09.182 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.182 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.182 [2024-09-29 21:42:27.971301] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:09.182 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.182 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:09.182 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.182 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.182 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.182 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.182 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:09.182 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:09.182 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:09.182 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:09.182 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:09.182 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.182 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.182 [2024-09-29 21:42:28.129733] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:09.441 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.441 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:09.441 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.441 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:09.442 21:42:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.442 [2024-09-29 21:42:28.289209] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:09.442 [2024-09-29 21:42:28.289272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.442 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.702 BaseBdev2 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.702 [ 00:11:09.702 { 00:11:09.702 "name": "BaseBdev2", 00:11:09.702 "aliases": [ 00:11:09.702 
"a4b5db73-173e-4f16-afb0-ab50433de889" 00:11:09.702 ], 00:11:09.702 "product_name": "Malloc disk", 00:11:09.702 "block_size": 512, 00:11:09.702 "num_blocks": 65536, 00:11:09.702 "uuid": "a4b5db73-173e-4f16-afb0-ab50433de889", 00:11:09.702 "assigned_rate_limits": { 00:11:09.702 "rw_ios_per_sec": 0, 00:11:09.702 "rw_mbytes_per_sec": 0, 00:11:09.702 "r_mbytes_per_sec": 0, 00:11:09.702 "w_mbytes_per_sec": 0 00:11:09.702 }, 00:11:09.702 "claimed": false, 00:11:09.702 "zoned": false, 00:11:09.702 "supported_io_types": { 00:11:09.702 "read": true, 00:11:09.702 "write": true, 00:11:09.702 "unmap": true, 00:11:09.702 "flush": true, 00:11:09.702 "reset": true, 00:11:09.702 "nvme_admin": false, 00:11:09.702 "nvme_io": false, 00:11:09.702 "nvme_io_md": false, 00:11:09.702 "write_zeroes": true, 00:11:09.702 "zcopy": true, 00:11:09.702 "get_zone_info": false, 00:11:09.702 "zone_management": false, 00:11:09.702 "zone_append": false, 00:11:09.702 "compare": false, 00:11:09.702 "compare_and_write": false, 00:11:09.702 "abort": true, 00:11:09.702 "seek_hole": false, 00:11:09.702 "seek_data": false, 00:11:09.702 "copy": true, 00:11:09.702 "nvme_iov_md": false 00:11:09.702 }, 00:11:09.702 "memory_domains": [ 00:11:09.702 { 00:11:09.702 "dma_device_id": "system", 00:11:09.702 "dma_device_type": 1 00:11:09.702 }, 00:11:09.702 { 00:11:09.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.702 "dma_device_type": 2 00:11:09.702 } 00:11:09.702 ], 00:11:09.702 "driver_specific": {} 00:11:09.702 } 00:11:09.702 ] 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.702 21:42:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.702 BaseBdev3 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.702 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.702 [ 00:11:09.702 { 
00:11:09.702 "name": "BaseBdev3", 00:11:09.702 "aliases": [ 00:11:09.702 "80257c0c-1808-4913-955f-22f753764244" 00:11:09.702 ], 00:11:09.702 "product_name": "Malloc disk", 00:11:09.702 "block_size": 512, 00:11:09.702 "num_blocks": 65536, 00:11:09.702 "uuid": "80257c0c-1808-4913-955f-22f753764244", 00:11:09.702 "assigned_rate_limits": { 00:11:09.702 "rw_ios_per_sec": 0, 00:11:09.702 "rw_mbytes_per_sec": 0, 00:11:09.702 "r_mbytes_per_sec": 0, 00:11:09.702 "w_mbytes_per_sec": 0 00:11:09.702 }, 00:11:09.702 "claimed": false, 00:11:09.702 "zoned": false, 00:11:09.702 "supported_io_types": { 00:11:09.702 "read": true, 00:11:09.702 "write": true, 00:11:09.702 "unmap": true, 00:11:09.702 "flush": true, 00:11:09.702 "reset": true, 00:11:09.702 "nvme_admin": false, 00:11:09.702 "nvme_io": false, 00:11:09.703 "nvme_io_md": false, 00:11:09.703 "write_zeroes": true, 00:11:09.703 "zcopy": true, 00:11:09.703 "get_zone_info": false, 00:11:09.703 "zone_management": false, 00:11:09.703 "zone_append": false, 00:11:09.703 "compare": false, 00:11:09.703 "compare_and_write": false, 00:11:09.703 "abort": true, 00:11:09.703 "seek_hole": false, 00:11:09.703 "seek_data": false, 00:11:09.703 "copy": true, 00:11:09.703 "nvme_iov_md": false 00:11:09.703 }, 00:11:09.703 "memory_domains": [ 00:11:09.703 { 00:11:09.703 "dma_device_id": "system", 00:11:09.703 "dma_device_type": 1 00:11:09.703 }, 00:11:09.703 { 00:11:09.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.703 "dma_device_type": 2 00:11:09.703 } 00:11:09.703 ], 00:11:09.703 "driver_specific": {} 00:11:09.703 } 00:11:09.703 ] 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.703 BaseBdev4 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.703 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:09.703 [ 00:11:09.703 { 00:11:09.703 "name": "BaseBdev4", 00:11:09.703 "aliases": [ 00:11:09.703 "ba3ce82e-9fe3-40a4-bd11-5ef2e493f0e9" 00:11:09.703 ], 00:11:09.703 "product_name": "Malloc disk", 00:11:09.703 "block_size": 512, 00:11:09.703 "num_blocks": 65536, 00:11:09.703 "uuid": "ba3ce82e-9fe3-40a4-bd11-5ef2e493f0e9", 00:11:09.703 "assigned_rate_limits": { 00:11:09.703 "rw_ios_per_sec": 0, 00:11:09.703 "rw_mbytes_per_sec": 0, 00:11:09.703 "r_mbytes_per_sec": 0, 00:11:09.703 "w_mbytes_per_sec": 0 00:11:09.703 }, 00:11:09.703 "claimed": false, 00:11:09.703 "zoned": false, 00:11:09.703 "supported_io_types": { 00:11:09.703 "read": true, 00:11:09.703 "write": true, 00:11:09.703 "unmap": true, 00:11:09.703 "flush": true, 00:11:09.703 "reset": true, 00:11:09.703 "nvme_admin": false, 00:11:09.703 "nvme_io": false, 00:11:09.703 "nvme_io_md": false, 00:11:09.703 "write_zeroes": true, 00:11:09.703 "zcopy": true, 00:11:09.703 "get_zone_info": false, 00:11:09.703 "zone_management": false, 00:11:09.703 "zone_append": false, 00:11:09.703 "compare": false, 00:11:09.703 "compare_and_write": false, 00:11:09.703 "abort": true, 00:11:09.703 "seek_hole": false, 00:11:09.703 "seek_data": false, 00:11:09.703 "copy": true, 00:11:09.963 "nvme_iov_md": false 00:11:09.963 }, 00:11:09.963 "memory_domains": [ 00:11:09.963 { 00:11:09.963 "dma_device_id": "system", 00:11:09.963 "dma_device_type": 1 00:11:09.963 }, 00:11:09.963 { 00:11:09.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.963 "dma_device_type": 2 00:11:09.963 } 00:11:09.963 ], 00:11:09.963 "driver_specific": {} 00:11:09.963 } 00:11:09.963 ] 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.963 21:42:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.963 [2024-09-29 21:42:28.692900] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.963 [2024-09-29 21:42:28.692955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.963 [2024-09-29 21:42:28.692980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.963 [2024-09-29 21:42:28.695081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.963 [2024-09-29 21:42:28.695138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.963 "name": "Existed_Raid", 00:11:09.963 "uuid": "72def2ab-7402-4d03-84ff-896bfde46257", 00:11:09.963 "strip_size_kb": 64, 00:11:09.963 "state": "configuring", 00:11:09.963 "raid_level": "concat", 00:11:09.963 "superblock": true, 00:11:09.963 "num_base_bdevs": 4, 00:11:09.963 "num_base_bdevs_discovered": 3, 00:11:09.963 "num_base_bdevs_operational": 4, 00:11:09.963 "base_bdevs_list": [ 00:11:09.963 { 00:11:09.963 "name": "BaseBdev1", 00:11:09.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.963 "is_configured": false, 00:11:09.963 "data_offset": 0, 00:11:09.963 "data_size": 0 00:11:09.963 }, 00:11:09.963 { 00:11:09.963 "name": "BaseBdev2", 00:11:09.963 "uuid": "a4b5db73-173e-4f16-afb0-ab50433de889", 00:11:09.963 "is_configured": true, 00:11:09.963 "data_offset": 2048, 00:11:09.963 "data_size": 63488 
00:11:09.963 }, 00:11:09.963 { 00:11:09.963 "name": "BaseBdev3", 00:11:09.963 "uuid": "80257c0c-1808-4913-955f-22f753764244", 00:11:09.963 "is_configured": true, 00:11:09.963 "data_offset": 2048, 00:11:09.963 "data_size": 63488 00:11:09.963 }, 00:11:09.963 { 00:11:09.963 "name": "BaseBdev4", 00:11:09.963 "uuid": "ba3ce82e-9fe3-40a4-bd11-5ef2e493f0e9", 00:11:09.963 "is_configured": true, 00:11:09.963 "data_offset": 2048, 00:11:09.963 "data_size": 63488 00:11:09.963 } 00:11:09.963 ] 00:11:09.963 }' 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.963 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.224 [2024-09-29 21:42:29.136221] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.224 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.224 "name": "Existed_Raid", 00:11:10.224 "uuid": "72def2ab-7402-4d03-84ff-896bfde46257", 00:11:10.224 "strip_size_kb": 64, 00:11:10.224 "state": "configuring", 00:11:10.224 "raid_level": "concat", 00:11:10.224 "superblock": true, 00:11:10.224 "num_base_bdevs": 4, 00:11:10.224 "num_base_bdevs_discovered": 2, 00:11:10.224 "num_base_bdevs_operational": 4, 00:11:10.224 "base_bdevs_list": [ 00:11:10.224 { 00:11:10.224 "name": "BaseBdev1", 00:11:10.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.224 "is_configured": false, 00:11:10.224 "data_offset": 0, 00:11:10.224 "data_size": 0 00:11:10.224 }, 00:11:10.224 { 00:11:10.224 "name": null, 00:11:10.224 "uuid": "a4b5db73-173e-4f16-afb0-ab50433de889", 00:11:10.224 "is_configured": false, 00:11:10.224 "data_offset": 0, 00:11:10.224 "data_size": 63488 
00:11:10.224 }, 00:11:10.224 { 00:11:10.224 "name": "BaseBdev3", 00:11:10.224 "uuid": "80257c0c-1808-4913-955f-22f753764244", 00:11:10.224 "is_configured": true, 00:11:10.224 "data_offset": 2048, 00:11:10.224 "data_size": 63488 00:11:10.224 }, 00:11:10.224 { 00:11:10.224 "name": "BaseBdev4", 00:11:10.224 "uuid": "ba3ce82e-9fe3-40a4-bd11-5ef2e493f0e9", 00:11:10.225 "is_configured": true, 00:11:10.225 "data_offset": 2048, 00:11:10.225 "data_size": 63488 00:11:10.225 } 00:11:10.225 ] 00:11:10.225 }' 00:11:10.225 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.225 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.796 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.797 [2024-09-29 21:42:29.625241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.797 BaseBdev1 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.797 [ 00:11:10.797 { 00:11:10.797 "name": "BaseBdev1", 00:11:10.797 "aliases": [ 00:11:10.797 "67f1848c-0c5d-48f1-a906-9625a7c27cb7" 00:11:10.797 ], 00:11:10.797 "product_name": "Malloc disk", 00:11:10.797 "block_size": 512, 00:11:10.797 "num_blocks": 65536, 00:11:10.797 "uuid": "67f1848c-0c5d-48f1-a906-9625a7c27cb7", 00:11:10.797 "assigned_rate_limits": { 00:11:10.797 "rw_ios_per_sec": 0, 00:11:10.797 "rw_mbytes_per_sec": 0, 
00:11:10.797 "r_mbytes_per_sec": 0, 00:11:10.797 "w_mbytes_per_sec": 0 00:11:10.797 }, 00:11:10.797 "claimed": true, 00:11:10.797 "claim_type": "exclusive_write", 00:11:10.797 "zoned": false, 00:11:10.797 "supported_io_types": { 00:11:10.797 "read": true, 00:11:10.797 "write": true, 00:11:10.797 "unmap": true, 00:11:10.797 "flush": true, 00:11:10.797 "reset": true, 00:11:10.797 "nvme_admin": false, 00:11:10.797 "nvme_io": false, 00:11:10.797 "nvme_io_md": false, 00:11:10.797 "write_zeroes": true, 00:11:10.797 "zcopy": true, 00:11:10.797 "get_zone_info": false, 00:11:10.797 "zone_management": false, 00:11:10.797 "zone_append": false, 00:11:10.797 "compare": false, 00:11:10.797 "compare_and_write": false, 00:11:10.797 "abort": true, 00:11:10.797 "seek_hole": false, 00:11:10.797 "seek_data": false, 00:11:10.797 "copy": true, 00:11:10.797 "nvme_iov_md": false 00:11:10.797 }, 00:11:10.797 "memory_domains": [ 00:11:10.797 { 00:11:10.797 "dma_device_id": "system", 00:11:10.797 "dma_device_type": 1 00:11:10.797 }, 00:11:10.797 { 00:11:10.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.797 "dma_device_type": 2 00:11:10.797 } 00:11:10.797 ], 00:11:10.797 "driver_specific": {} 00:11:10.797 } 00:11:10.797 ] 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.797 21:42:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.797 "name": "Existed_Raid", 00:11:10.797 "uuid": "72def2ab-7402-4d03-84ff-896bfde46257", 00:11:10.797 "strip_size_kb": 64, 00:11:10.797 "state": "configuring", 00:11:10.797 "raid_level": "concat", 00:11:10.797 "superblock": true, 00:11:10.797 "num_base_bdevs": 4, 00:11:10.797 "num_base_bdevs_discovered": 3, 00:11:10.797 "num_base_bdevs_operational": 4, 00:11:10.797 "base_bdevs_list": [ 00:11:10.797 { 00:11:10.797 "name": "BaseBdev1", 00:11:10.797 "uuid": "67f1848c-0c5d-48f1-a906-9625a7c27cb7", 00:11:10.797 "is_configured": true, 00:11:10.797 "data_offset": 2048, 00:11:10.797 "data_size": 63488 00:11:10.797 }, 00:11:10.797 { 
00:11:10.797 "name": null, 00:11:10.797 "uuid": "a4b5db73-173e-4f16-afb0-ab50433de889", 00:11:10.797 "is_configured": false, 00:11:10.797 "data_offset": 0, 00:11:10.797 "data_size": 63488 00:11:10.797 }, 00:11:10.797 { 00:11:10.797 "name": "BaseBdev3", 00:11:10.797 "uuid": "80257c0c-1808-4913-955f-22f753764244", 00:11:10.797 "is_configured": true, 00:11:10.797 "data_offset": 2048, 00:11:10.797 "data_size": 63488 00:11:10.797 }, 00:11:10.797 { 00:11:10.797 "name": "BaseBdev4", 00:11:10.797 "uuid": "ba3ce82e-9fe3-40a4-bd11-5ef2e493f0e9", 00:11:10.797 "is_configured": true, 00:11:10.797 "data_offset": 2048, 00:11:10.797 "data_size": 63488 00:11:10.797 } 00:11:10.797 ] 00:11:10.797 }' 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.797 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.367 [2024-09-29 21:42:30.080495] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.367 21:42:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.367 "name": "Existed_Raid", 00:11:11.367 "uuid": "72def2ab-7402-4d03-84ff-896bfde46257", 00:11:11.367 "strip_size_kb": 64, 00:11:11.367 "state": "configuring", 00:11:11.367 "raid_level": "concat", 00:11:11.367 "superblock": true, 00:11:11.367 "num_base_bdevs": 4, 00:11:11.367 "num_base_bdevs_discovered": 2, 00:11:11.367 "num_base_bdevs_operational": 4, 00:11:11.367 "base_bdevs_list": [ 00:11:11.367 { 00:11:11.367 "name": "BaseBdev1", 00:11:11.367 "uuid": "67f1848c-0c5d-48f1-a906-9625a7c27cb7", 00:11:11.367 "is_configured": true, 00:11:11.367 "data_offset": 2048, 00:11:11.367 "data_size": 63488 00:11:11.367 }, 00:11:11.367 { 00:11:11.367 "name": null, 00:11:11.367 "uuid": "a4b5db73-173e-4f16-afb0-ab50433de889", 00:11:11.367 "is_configured": false, 00:11:11.367 "data_offset": 0, 00:11:11.367 "data_size": 63488 00:11:11.367 }, 00:11:11.367 { 00:11:11.367 "name": null, 00:11:11.367 "uuid": "80257c0c-1808-4913-955f-22f753764244", 00:11:11.367 "is_configured": false, 00:11:11.367 "data_offset": 0, 00:11:11.367 "data_size": 63488 00:11:11.367 }, 00:11:11.367 { 00:11:11.367 "name": "BaseBdev4", 00:11:11.367 "uuid": "ba3ce82e-9fe3-40a4-bd11-5ef2e493f0e9", 00:11:11.367 "is_configured": true, 00:11:11.367 "data_offset": 2048, 00:11:11.367 "data_size": 63488 00:11:11.367 } 00:11:11.367 ] 00:11:11.367 }' 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.367 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.628 
21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.628 [2024-09-29 21:42:30.523758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.628 "name": "Existed_Raid", 00:11:11.628 "uuid": "72def2ab-7402-4d03-84ff-896bfde46257", 00:11:11.628 "strip_size_kb": 64, 00:11:11.628 "state": "configuring", 00:11:11.628 "raid_level": "concat", 00:11:11.628 "superblock": true, 00:11:11.628 "num_base_bdevs": 4, 00:11:11.628 "num_base_bdevs_discovered": 3, 00:11:11.628 "num_base_bdevs_operational": 4, 00:11:11.628 "base_bdevs_list": [ 00:11:11.628 { 00:11:11.628 "name": "BaseBdev1", 00:11:11.628 "uuid": "67f1848c-0c5d-48f1-a906-9625a7c27cb7", 00:11:11.628 "is_configured": true, 00:11:11.628 "data_offset": 2048, 00:11:11.628 "data_size": 63488 00:11:11.628 }, 00:11:11.628 { 00:11:11.628 "name": null, 00:11:11.628 "uuid": "a4b5db73-173e-4f16-afb0-ab50433de889", 00:11:11.628 "is_configured": false, 00:11:11.628 "data_offset": 0, 00:11:11.628 "data_size": 63488 00:11:11.628 }, 00:11:11.628 { 00:11:11.628 "name": "BaseBdev3", 00:11:11.628 "uuid": "80257c0c-1808-4913-955f-22f753764244", 00:11:11.628 "is_configured": true, 00:11:11.628 "data_offset": 2048, 00:11:11.628 "data_size": 63488 00:11:11.628 }, 00:11:11.628 { 00:11:11.628 "name": "BaseBdev4", 00:11:11.628 "uuid": 
"ba3ce82e-9fe3-40a4-bd11-5ef2e493f0e9", 00:11:11.628 "is_configured": true, 00:11:11.628 "data_offset": 2048, 00:11:11.628 "data_size": 63488 00:11:11.628 } 00:11:11.628 ] 00:11:11.628 }' 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.628 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.198 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.198 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.198 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:12.198 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.198 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.198 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:12.198 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:12.198 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.198 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.198 [2024-09-29 21:42:30.994961] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.198 "name": "Existed_Raid", 00:11:12.198 "uuid": "72def2ab-7402-4d03-84ff-896bfde46257", 00:11:12.198 "strip_size_kb": 64, 00:11:12.198 "state": "configuring", 00:11:12.198 "raid_level": "concat", 00:11:12.198 "superblock": true, 00:11:12.198 "num_base_bdevs": 4, 00:11:12.198 "num_base_bdevs_discovered": 2, 00:11:12.198 "num_base_bdevs_operational": 4, 00:11:12.198 "base_bdevs_list": [ 00:11:12.198 { 00:11:12.198 "name": null, 00:11:12.198 
"uuid": "67f1848c-0c5d-48f1-a906-9625a7c27cb7", 00:11:12.198 "is_configured": false, 00:11:12.198 "data_offset": 0, 00:11:12.198 "data_size": 63488 00:11:12.198 }, 00:11:12.198 { 00:11:12.198 "name": null, 00:11:12.198 "uuid": "a4b5db73-173e-4f16-afb0-ab50433de889", 00:11:12.198 "is_configured": false, 00:11:12.198 "data_offset": 0, 00:11:12.198 "data_size": 63488 00:11:12.198 }, 00:11:12.198 { 00:11:12.198 "name": "BaseBdev3", 00:11:12.198 "uuid": "80257c0c-1808-4913-955f-22f753764244", 00:11:12.198 "is_configured": true, 00:11:12.198 "data_offset": 2048, 00:11:12.198 "data_size": 63488 00:11:12.198 }, 00:11:12.198 { 00:11:12.198 "name": "BaseBdev4", 00:11:12.198 "uuid": "ba3ce82e-9fe3-40a4-bd11-5ef2e493f0e9", 00:11:12.198 "is_configured": true, 00:11:12.198 "data_offset": 2048, 00:11:12.198 "data_size": 63488 00:11:12.198 } 00:11:12.198 ] 00:11:12.198 }' 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.198 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.767 [2024-09-29 21:42:31.556067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.767 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.768 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.768 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.768 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.768 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.768 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.768 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.768 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.768 21:42:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.768 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.768 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.768 "name": "Existed_Raid", 00:11:12.768 "uuid": "72def2ab-7402-4d03-84ff-896bfde46257", 00:11:12.768 "strip_size_kb": 64, 00:11:12.768 "state": "configuring", 00:11:12.768 "raid_level": "concat", 00:11:12.768 "superblock": true, 00:11:12.768 "num_base_bdevs": 4, 00:11:12.768 "num_base_bdevs_discovered": 3, 00:11:12.768 "num_base_bdevs_operational": 4, 00:11:12.768 "base_bdevs_list": [ 00:11:12.768 { 00:11:12.768 "name": null, 00:11:12.768 "uuid": "67f1848c-0c5d-48f1-a906-9625a7c27cb7", 00:11:12.768 "is_configured": false, 00:11:12.768 "data_offset": 0, 00:11:12.768 "data_size": 63488 00:11:12.768 }, 00:11:12.768 { 00:11:12.768 "name": "BaseBdev2", 00:11:12.768 "uuid": "a4b5db73-173e-4f16-afb0-ab50433de889", 00:11:12.768 "is_configured": true, 00:11:12.768 "data_offset": 2048, 00:11:12.768 "data_size": 63488 00:11:12.768 }, 00:11:12.768 { 00:11:12.768 "name": "BaseBdev3", 00:11:12.768 "uuid": "80257c0c-1808-4913-955f-22f753764244", 00:11:12.768 "is_configured": true, 00:11:12.768 "data_offset": 2048, 00:11:12.768 "data_size": 63488 00:11:12.768 }, 00:11:12.768 { 00:11:12.768 "name": "BaseBdev4", 00:11:12.768 "uuid": "ba3ce82e-9fe3-40a4-bd11-5ef2e493f0e9", 00:11:12.768 "is_configured": true, 00:11:12.768 "data_offset": 2048, 00:11:12.768 "data_size": 63488 00:11:12.768 } 00:11:12.768 ] 00:11:12.768 }' 00:11:12.768 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.768 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.027 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.028 21:42:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.028 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.028 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:13.028 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.028 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:13.028 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 67f1848c-0c5d-48f1-a906-9625a7c27cb7 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.288 [2024-09-29 21:42:32.094102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:13.288 [2024-09-29 21:42:32.094353] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:13.288 [2024-09-29 21:42:32.094367] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:13.288 [2024-09-29 21:42:32.094681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:13.288 [2024-09-29 21:42:32.094828] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:13.288 [2024-09-29 21:42:32.094840] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:13.288 [2024-09-29 21:42:32.094974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.288 NewBaseBdev 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.288 21:42:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.288 [ 00:11:13.288 { 00:11:13.288 "name": "NewBaseBdev", 00:11:13.288 "aliases": [ 00:11:13.288 "67f1848c-0c5d-48f1-a906-9625a7c27cb7" 00:11:13.288 ], 00:11:13.288 "product_name": "Malloc disk", 00:11:13.288 "block_size": 512, 00:11:13.288 "num_blocks": 65536, 00:11:13.288 "uuid": "67f1848c-0c5d-48f1-a906-9625a7c27cb7", 00:11:13.288 "assigned_rate_limits": { 00:11:13.288 "rw_ios_per_sec": 0, 00:11:13.288 "rw_mbytes_per_sec": 0, 00:11:13.288 "r_mbytes_per_sec": 0, 00:11:13.288 "w_mbytes_per_sec": 0 00:11:13.288 }, 00:11:13.288 "claimed": true, 00:11:13.288 "claim_type": "exclusive_write", 00:11:13.288 "zoned": false, 00:11:13.288 "supported_io_types": { 00:11:13.288 "read": true, 00:11:13.288 "write": true, 00:11:13.288 "unmap": true, 00:11:13.288 "flush": true, 00:11:13.288 "reset": true, 00:11:13.288 "nvme_admin": false, 00:11:13.288 "nvme_io": false, 00:11:13.288 "nvme_io_md": false, 00:11:13.288 "write_zeroes": true, 00:11:13.288 "zcopy": true, 00:11:13.288 "get_zone_info": false, 00:11:13.288 "zone_management": false, 00:11:13.288 "zone_append": false, 00:11:13.288 "compare": false, 00:11:13.288 "compare_and_write": false, 00:11:13.288 "abort": true, 00:11:13.288 "seek_hole": false, 00:11:13.288 "seek_data": false, 00:11:13.288 "copy": true, 00:11:13.288 "nvme_iov_md": false 00:11:13.288 }, 00:11:13.288 "memory_domains": [ 00:11:13.288 { 00:11:13.288 "dma_device_id": "system", 00:11:13.288 "dma_device_type": 1 00:11:13.288 }, 00:11:13.288 { 00:11:13.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.288 "dma_device_type": 2 00:11:13.288 } 00:11:13.288 ], 00:11:13.288 "driver_specific": {} 00:11:13.288 } 00:11:13.288 ] 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:13.288 21:42:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.288 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.289 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.289 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.289 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.289 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.289 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.289 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.289 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.289 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.289 "name": "Existed_Raid", 00:11:13.289 "uuid": "72def2ab-7402-4d03-84ff-896bfde46257", 00:11:13.289 "strip_size_kb": 64, 00:11:13.289 
"state": "online", 00:11:13.289 "raid_level": "concat", 00:11:13.289 "superblock": true, 00:11:13.289 "num_base_bdevs": 4, 00:11:13.289 "num_base_bdevs_discovered": 4, 00:11:13.289 "num_base_bdevs_operational": 4, 00:11:13.289 "base_bdevs_list": [ 00:11:13.289 { 00:11:13.289 "name": "NewBaseBdev", 00:11:13.289 "uuid": "67f1848c-0c5d-48f1-a906-9625a7c27cb7", 00:11:13.289 "is_configured": true, 00:11:13.289 "data_offset": 2048, 00:11:13.289 "data_size": 63488 00:11:13.289 }, 00:11:13.289 { 00:11:13.289 "name": "BaseBdev2", 00:11:13.289 "uuid": "a4b5db73-173e-4f16-afb0-ab50433de889", 00:11:13.289 "is_configured": true, 00:11:13.289 "data_offset": 2048, 00:11:13.289 "data_size": 63488 00:11:13.289 }, 00:11:13.289 { 00:11:13.289 "name": "BaseBdev3", 00:11:13.289 "uuid": "80257c0c-1808-4913-955f-22f753764244", 00:11:13.289 "is_configured": true, 00:11:13.289 "data_offset": 2048, 00:11:13.289 "data_size": 63488 00:11:13.289 }, 00:11:13.289 { 00:11:13.289 "name": "BaseBdev4", 00:11:13.289 "uuid": "ba3ce82e-9fe3-40a4-bd11-5ef2e493f0e9", 00:11:13.289 "is_configured": true, 00:11:13.289 "data_offset": 2048, 00:11:13.289 "data_size": 63488 00:11:13.289 } 00:11:13.289 ] 00:11:13.289 }' 00:11:13.289 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.289 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.858 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.858 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:13.858 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.858 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.858 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.858 
21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.858 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:13.858 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.858 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.858 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.858 [2024-09-29 21:42:32.577617] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.858 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.858 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.858 "name": "Existed_Raid", 00:11:13.858 "aliases": [ 00:11:13.858 "72def2ab-7402-4d03-84ff-896bfde46257" 00:11:13.858 ], 00:11:13.858 "product_name": "Raid Volume", 00:11:13.858 "block_size": 512, 00:11:13.858 "num_blocks": 253952, 00:11:13.858 "uuid": "72def2ab-7402-4d03-84ff-896bfde46257", 00:11:13.858 "assigned_rate_limits": { 00:11:13.858 "rw_ios_per_sec": 0, 00:11:13.858 "rw_mbytes_per_sec": 0, 00:11:13.858 "r_mbytes_per_sec": 0, 00:11:13.858 "w_mbytes_per_sec": 0 00:11:13.858 }, 00:11:13.858 "claimed": false, 00:11:13.858 "zoned": false, 00:11:13.858 "supported_io_types": { 00:11:13.858 "read": true, 00:11:13.858 "write": true, 00:11:13.858 "unmap": true, 00:11:13.858 "flush": true, 00:11:13.858 "reset": true, 00:11:13.858 "nvme_admin": false, 00:11:13.859 "nvme_io": false, 00:11:13.859 "nvme_io_md": false, 00:11:13.859 "write_zeroes": true, 00:11:13.859 "zcopy": false, 00:11:13.859 "get_zone_info": false, 00:11:13.859 "zone_management": false, 00:11:13.859 "zone_append": false, 00:11:13.859 "compare": false, 00:11:13.859 "compare_and_write": false, 00:11:13.859 "abort": 
false, 00:11:13.859 "seek_hole": false, 00:11:13.859 "seek_data": false, 00:11:13.859 "copy": false, 00:11:13.859 "nvme_iov_md": false 00:11:13.859 }, 00:11:13.859 "memory_domains": [ 00:11:13.859 { 00:11:13.859 "dma_device_id": "system", 00:11:13.859 "dma_device_type": 1 00:11:13.859 }, 00:11:13.859 { 00:11:13.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.859 "dma_device_type": 2 00:11:13.859 }, 00:11:13.859 { 00:11:13.859 "dma_device_id": "system", 00:11:13.859 "dma_device_type": 1 00:11:13.859 }, 00:11:13.859 { 00:11:13.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.859 "dma_device_type": 2 00:11:13.859 }, 00:11:13.859 { 00:11:13.859 "dma_device_id": "system", 00:11:13.859 "dma_device_type": 1 00:11:13.859 }, 00:11:13.859 { 00:11:13.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.859 "dma_device_type": 2 00:11:13.859 }, 00:11:13.859 { 00:11:13.859 "dma_device_id": "system", 00:11:13.859 "dma_device_type": 1 00:11:13.859 }, 00:11:13.859 { 00:11:13.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.859 "dma_device_type": 2 00:11:13.859 } 00:11:13.859 ], 00:11:13.859 "driver_specific": { 00:11:13.859 "raid": { 00:11:13.859 "uuid": "72def2ab-7402-4d03-84ff-896bfde46257", 00:11:13.859 "strip_size_kb": 64, 00:11:13.859 "state": "online", 00:11:13.859 "raid_level": "concat", 00:11:13.859 "superblock": true, 00:11:13.859 "num_base_bdevs": 4, 00:11:13.859 "num_base_bdevs_discovered": 4, 00:11:13.859 "num_base_bdevs_operational": 4, 00:11:13.859 "base_bdevs_list": [ 00:11:13.859 { 00:11:13.859 "name": "NewBaseBdev", 00:11:13.859 "uuid": "67f1848c-0c5d-48f1-a906-9625a7c27cb7", 00:11:13.859 "is_configured": true, 00:11:13.859 "data_offset": 2048, 00:11:13.859 "data_size": 63488 00:11:13.859 }, 00:11:13.859 { 00:11:13.859 "name": "BaseBdev2", 00:11:13.859 "uuid": "a4b5db73-173e-4f16-afb0-ab50433de889", 00:11:13.859 "is_configured": true, 00:11:13.859 "data_offset": 2048, 00:11:13.859 "data_size": 63488 00:11:13.859 }, 00:11:13.859 { 00:11:13.859 
"name": "BaseBdev3", 00:11:13.859 "uuid": "80257c0c-1808-4913-955f-22f753764244", 00:11:13.859 "is_configured": true, 00:11:13.859 "data_offset": 2048, 00:11:13.859 "data_size": 63488 00:11:13.859 }, 00:11:13.859 { 00:11:13.859 "name": "BaseBdev4", 00:11:13.859 "uuid": "ba3ce82e-9fe3-40a4-bd11-5ef2e493f0e9", 00:11:13.859 "is_configured": true, 00:11:13.859 "data_offset": 2048, 00:11:13.859 "data_size": 63488 00:11:13.859 } 00:11:13.859 ] 00:11:13.859 } 00:11:13.859 } 00:11:13.859 }' 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:13.859 BaseBdev2 00:11:13.859 BaseBdev3 00:11:13.859 BaseBdev4' 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.859 21:42:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.859 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.119 [2024-09-29 21:42:32.880723] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.119 [2024-09-29 21:42:32.880756] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.119 [2024-09-29 21:42:32.880830] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.119 [2024-09-29 21:42:32.880903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.119 [2024-09-29 21:42:32.880918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72036 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72036 ']' 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72036 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72036 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:14.119 killing process with pid 72036 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72036' 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72036 00:11:14.119 [2024-09-29 21:42:32.928636] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:14.119 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72036 00:11:14.385 [2024-09-29 21:42:33.342344] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.777 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:15.777 00:11:15.777 real 0m11.638s 00:11:15.777 user 0m18.083s 00:11:15.777 sys 0m2.231s 00:11:15.777 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.777 21:42:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.777 ************************************ 00:11:15.777 END TEST raid_state_function_test_sb 00:11:15.777 ************************************ 00:11:15.777 21:42:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:15.777 21:42:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:15.777 21:42:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.777 21:42:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.777 ************************************ 00:11:15.777 START TEST raid_superblock_test 00:11:15.777 ************************************ 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:15.777 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:16.038 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72712 00:11:16.038 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:16.038 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72712 00:11:16.038 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72712 ']' 00:11:16.038 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.038 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:16.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.038 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.038 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:16.038 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.038 [2024-09-29 21:42:34.859658] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:16.038 [2024-09-29 21:42:34.859828] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72712 ] 00:11:16.298 [2024-09-29 21:42:35.029611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.298 [2024-09-29 21:42:35.269667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.558 [2024-09-29 21:42:35.497320] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.558 [2024-09-29 21:42:35.497356] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.817 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:16.817 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:16.817 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:16.817 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:16.818 
21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.818 malloc1 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.818 [2024-09-29 21:42:35.729871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:16.818 [2024-09-29 21:42:35.729943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.818 [2024-09-29 21:42:35.729969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:16.818 [2024-09-29 21:42:35.729981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.818 [2024-09-29 21:42:35.732385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.818 [2024-09-29 21:42:35.732423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:16.818 pt1 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.818 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.078 malloc2 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.078 [2024-09-29 21:42:35.820990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:17.078 [2024-09-29 21:42:35.821058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.078 [2024-09-29 21:42:35.821084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:17.078 [2024-09-29 21:42:35.821093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.078 [2024-09-29 21:42:35.823428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.078 [2024-09-29 21:42:35.823464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:17.078 
pt2 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.078 malloc3 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.078 [2024-09-29 21:42:35.882370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:17.078 [2024-09-29 21:42:35.882421] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.078 [2024-09-29 21:42:35.882443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:17.078 [2024-09-29 21:42:35.882452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.078 [2024-09-29 21:42:35.884798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.078 [2024-09-29 21:42:35.884836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:17.078 pt3 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.078 malloc4 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.078 [2024-09-29 21:42:35.942165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:17.078 [2024-09-29 21:42:35.942215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.078 [2024-09-29 21:42:35.942233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:17.078 [2024-09-29 21:42:35.942242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.078 [2024-09-29 21:42:35.944544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.078 [2024-09-29 21:42:35.944580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:17.078 pt4 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.078 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.079 [2024-09-29 21:42:35.954227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:17.079 [2024-09-29 
21:42:35.956277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:17.079 [2024-09-29 21:42:35.956343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:17.079 [2024-09-29 21:42:35.956419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:17.079 [2024-09-29 21:42:35.956613] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:17.079 [2024-09-29 21:42:35.956636] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:17.079 [2024-09-29 21:42:35.956887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:17.079 [2024-09-29 21:42:35.957069] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:17.079 [2024-09-29 21:42:35.957089] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:17.079 [2024-09-29 21:42:35.957229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.079 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.079 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.079 "name": "raid_bdev1", 00:11:17.079 "uuid": "d1b16f00-a280-4809-a7d7-348b34d6c4ac", 00:11:17.079 "strip_size_kb": 64, 00:11:17.079 "state": "online", 00:11:17.079 "raid_level": "concat", 00:11:17.079 "superblock": true, 00:11:17.079 "num_base_bdevs": 4, 00:11:17.079 "num_base_bdevs_discovered": 4, 00:11:17.079 "num_base_bdevs_operational": 4, 00:11:17.079 "base_bdevs_list": [ 00:11:17.079 { 00:11:17.079 "name": "pt1", 00:11:17.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.079 "is_configured": true, 00:11:17.079 "data_offset": 2048, 00:11:17.079 "data_size": 63488 00:11:17.079 }, 00:11:17.079 { 00:11:17.079 "name": "pt2", 00:11:17.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.079 "is_configured": true, 00:11:17.079 "data_offset": 2048, 00:11:17.079 "data_size": 63488 00:11:17.079 }, 00:11:17.079 { 00:11:17.079 "name": "pt3", 00:11:17.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.079 "is_configured": true, 00:11:17.079 "data_offset": 2048, 00:11:17.079 
"data_size": 63488 00:11:17.079 }, 00:11:17.079 { 00:11:17.079 "name": "pt4", 00:11:17.079 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.079 "is_configured": true, 00:11:17.079 "data_offset": 2048, 00:11:17.079 "data_size": 63488 00:11:17.079 } 00:11:17.079 ] 00:11:17.079 }' 00:11:17.079 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.079 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.648 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:17.648 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:17.648 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:17.648 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:17.648 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:17.648 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:17.648 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.649 [2024-09-29 21:42:36.405691] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:17.649 "name": "raid_bdev1", 00:11:17.649 "aliases": [ 00:11:17.649 "d1b16f00-a280-4809-a7d7-348b34d6c4ac" 
00:11:17.649 ], 00:11:17.649 "product_name": "Raid Volume", 00:11:17.649 "block_size": 512, 00:11:17.649 "num_blocks": 253952, 00:11:17.649 "uuid": "d1b16f00-a280-4809-a7d7-348b34d6c4ac", 00:11:17.649 "assigned_rate_limits": { 00:11:17.649 "rw_ios_per_sec": 0, 00:11:17.649 "rw_mbytes_per_sec": 0, 00:11:17.649 "r_mbytes_per_sec": 0, 00:11:17.649 "w_mbytes_per_sec": 0 00:11:17.649 }, 00:11:17.649 "claimed": false, 00:11:17.649 "zoned": false, 00:11:17.649 "supported_io_types": { 00:11:17.649 "read": true, 00:11:17.649 "write": true, 00:11:17.649 "unmap": true, 00:11:17.649 "flush": true, 00:11:17.649 "reset": true, 00:11:17.649 "nvme_admin": false, 00:11:17.649 "nvme_io": false, 00:11:17.649 "nvme_io_md": false, 00:11:17.649 "write_zeroes": true, 00:11:17.649 "zcopy": false, 00:11:17.649 "get_zone_info": false, 00:11:17.649 "zone_management": false, 00:11:17.649 "zone_append": false, 00:11:17.649 "compare": false, 00:11:17.649 "compare_and_write": false, 00:11:17.649 "abort": false, 00:11:17.649 "seek_hole": false, 00:11:17.649 "seek_data": false, 00:11:17.649 "copy": false, 00:11:17.649 "nvme_iov_md": false 00:11:17.649 }, 00:11:17.649 "memory_domains": [ 00:11:17.649 { 00:11:17.649 "dma_device_id": "system", 00:11:17.649 "dma_device_type": 1 00:11:17.649 }, 00:11:17.649 { 00:11:17.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.649 "dma_device_type": 2 00:11:17.649 }, 00:11:17.649 { 00:11:17.649 "dma_device_id": "system", 00:11:17.649 "dma_device_type": 1 00:11:17.649 }, 00:11:17.649 { 00:11:17.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.649 "dma_device_type": 2 00:11:17.649 }, 00:11:17.649 { 00:11:17.649 "dma_device_id": "system", 00:11:17.649 "dma_device_type": 1 00:11:17.649 }, 00:11:17.649 { 00:11:17.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.649 "dma_device_type": 2 00:11:17.649 }, 00:11:17.649 { 00:11:17.649 "dma_device_id": "system", 00:11:17.649 "dma_device_type": 1 00:11:17.649 }, 00:11:17.649 { 00:11:17.649 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:17.649 "dma_device_type": 2 00:11:17.649 } 00:11:17.649 ], 00:11:17.649 "driver_specific": { 00:11:17.649 "raid": { 00:11:17.649 "uuid": "d1b16f00-a280-4809-a7d7-348b34d6c4ac", 00:11:17.649 "strip_size_kb": 64, 00:11:17.649 "state": "online", 00:11:17.649 "raid_level": "concat", 00:11:17.649 "superblock": true, 00:11:17.649 "num_base_bdevs": 4, 00:11:17.649 "num_base_bdevs_discovered": 4, 00:11:17.649 "num_base_bdevs_operational": 4, 00:11:17.649 "base_bdevs_list": [ 00:11:17.649 { 00:11:17.649 "name": "pt1", 00:11:17.649 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:17.649 "is_configured": true, 00:11:17.649 "data_offset": 2048, 00:11:17.649 "data_size": 63488 00:11:17.649 }, 00:11:17.649 { 00:11:17.649 "name": "pt2", 00:11:17.649 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:17.649 "is_configured": true, 00:11:17.649 "data_offset": 2048, 00:11:17.649 "data_size": 63488 00:11:17.649 }, 00:11:17.649 { 00:11:17.649 "name": "pt3", 00:11:17.649 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:17.649 "is_configured": true, 00:11:17.649 "data_offset": 2048, 00:11:17.649 "data_size": 63488 00:11:17.649 }, 00:11:17.649 { 00:11:17.649 "name": "pt4", 00:11:17.649 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:17.649 "is_configured": true, 00:11:17.649 "data_offset": 2048, 00:11:17.649 "data_size": 63488 00:11:17.649 } 00:11:17.649 ] 00:11:17.649 } 00:11:17.649 } 00:11:17.649 }' 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:17.649 pt2 00:11:17.649 pt3 00:11:17.649 pt4' 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.649 21:42:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.649 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.909 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.909 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.909 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.909 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:17.909 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:17.909 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.909 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.909 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:17.909 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.909 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:17.909 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:17.909 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.910 [2024-09-29 21:42:36.725131] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d1b16f00-a280-4809-a7d7-348b34d6c4ac 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d1b16f00-a280-4809-a7d7-348b34d6c4ac ']' 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.910 [2024-09-29 21:42:36.756783] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:17.910 [2024-09-29 21:42:36.756814] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.910 [2024-09-29 21:42:36.756885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.910 [2024-09-29 21:42:36.756953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.910 [2024-09-29 21:42:36.756980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:17.910 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:17.910 21:42:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.170 [2024-09-29 21:42:36.900554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:18.170 [2024-09-29 21:42:36.902665] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:18.170 [2024-09-29 21:42:36.902718] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:18.170 [2024-09-29 21:42:36.902750] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:18.170 [2024-09-29 21:42:36.902814] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:18.170 [2024-09-29 21:42:36.902857] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:18.170 [2024-09-29 21:42:36.902879] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:18.170 [2024-09-29 21:42:36.902897] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:18.170 [2024-09-29 21:42:36.902910] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:18.170 [2024-09-29 21:42:36.902920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:18.170 request: 00:11:18.170 { 00:11:18.170 "name": "raid_bdev1", 00:11:18.170 "raid_level": "concat", 00:11:18.170 "base_bdevs": [ 00:11:18.170 "malloc1", 00:11:18.170 "malloc2", 00:11:18.170 "malloc3", 00:11:18.170 "malloc4" 00:11:18.170 ], 00:11:18.170 "strip_size_kb": 64, 00:11:18.170 "superblock": false, 00:11:18.170 "method": "bdev_raid_create", 00:11:18.170 "req_id": 1 00:11:18.170 } 00:11:18.170 Got JSON-RPC error response 00:11:18.170 response: 00:11:18.170 { 00:11:18.170 "code": -17, 00:11:18.170 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:18.170 } 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.170 [2024-09-29 21:42:36.984383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:18.170 [2024-09-29 21:42:36.984433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.170 [2024-09-29 21:42:36.984447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:18.170 [2024-09-29 21:42:36.984459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.170 [2024-09-29 21:42:36.986839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.170 [2024-09-29 21:42:36.986882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:18.170 [2024-09-29 21:42:36.986950] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:18.170 [2024-09-29 21:42:36.987033] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:18.170 pt1 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.170 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.170 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.170 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.170 "name": "raid_bdev1", 00:11:18.170 "uuid": "d1b16f00-a280-4809-a7d7-348b34d6c4ac", 00:11:18.170 "strip_size_kb": 64, 00:11:18.170 "state": "configuring", 00:11:18.170 "raid_level": "concat", 00:11:18.170 "superblock": true, 00:11:18.170 "num_base_bdevs": 4, 00:11:18.170 "num_base_bdevs_discovered": 1, 00:11:18.171 "num_base_bdevs_operational": 4, 00:11:18.171 "base_bdevs_list": [ 00:11:18.171 { 00:11:18.171 "name": "pt1", 00:11:18.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.171 "is_configured": true, 00:11:18.171 "data_offset": 2048, 00:11:18.171 "data_size": 63488 00:11:18.171 }, 00:11:18.171 { 00:11:18.171 "name": null, 00:11:18.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.171 "is_configured": false, 00:11:18.171 "data_offset": 2048, 00:11:18.171 "data_size": 63488 00:11:18.171 }, 00:11:18.171 { 00:11:18.171 "name": null, 00:11:18.171 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.171 "is_configured": false, 00:11:18.171 "data_offset": 2048, 00:11:18.171 "data_size": 63488 00:11:18.171 }, 00:11:18.171 { 00:11:18.171 "name": null, 00:11:18.171 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.171 "is_configured": false, 00:11:18.171 "data_offset": 2048, 00:11:18.171 "data_size": 63488 00:11:18.171 } 00:11:18.171 ] 00:11:18.171 }' 00:11:18.171 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.171 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.430 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:18.430 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:18.430 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.430 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.430 [2024-09-29 21:42:37.411646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:18.430 [2024-09-29 21:42:37.411699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.430 [2024-09-29 21:42:37.411715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:18.430 [2024-09-29 21:42:37.411726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.430 [2024-09-29 21:42:37.412181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.430 [2024-09-29 21:42:37.412212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:18.430 [2024-09-29 21:42:37.412276] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:18.430 [2024-09-29 21:42:37.412300] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:18.692 pt2 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.692 [2024-09-29 21:42:37.423646] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.692 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.693 21:42:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.693 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.693 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.693 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.693 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.693 "name": "raid_bdev1", 00:11:18.693 "uuid": "d1b16f00-a280-4809-a7d7-348b34d6c4ac", 00:11:18.693 "strip_size_kb": 64, 00:11:18.693 "state": "configuring", 00:11:18.693 "raid_level": "concat", 00:11:18.693 "superblock": true, 00:11:18.693 "num_base_bdevs": 4, 00:11:18.693 "num_base_bdevs_discovered": 1, 00:11:18.693 "num_base_bdevs_operational": 4, 00:11:18.693 "base_bdevs_list": [ 00:11:18.693 { 00:11:18.693 "name": "pt1", 00:11:18.693 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.693 "is_configured": true, 00:11:18.693 "data_offset": 2048, 00:11:18.693 "data_size": 63488 00:11:18.693 }, 00:11:18.693 { 00:11:18.693 "name": null, 00:11:18.693 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.693 "is_configured": false, 00:11:18.693 "data_offset": 0, 00:11:18.693 "data_size": 63488 00:11:18.693 }, 00:11:18.693 { 00:11:18.693 "name": null, 00:11:18.693 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.693 "is_configured": false, 00:11:18.693 "data_offset": 2048, 00:11:18.693 "data_size": 63488 00:11:18.693 }, 00:11:18.693 { 00:11:18.693 "name": null, 00:11:18.693 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:18.693 "is_configured": false, 00:11:18.693 "data_offset": 2048, 00:11:18.693 "data_size": 63488 00:11:18.693 } 00:11:18.693 ] 00:11:18.693 }' 00:11:18.693 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.693 21:42:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:18.952 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:18.952 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:18.952 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:18.952 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.952 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.952 [2024-09-29 21:42:37.922858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:18.952 [2024-09-29 21:42:37.922925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.952 [2024-09-29 21:42:37.922946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:18.952 [2024-09-29 21:42:37.922976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.952 [2024-09-29 21:42:37.923465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.952 [2024-09-29 21:42:37.923491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:18.952 [2024-09-29 21:42:37.923576] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:18.952 [2024-09-29 21:42:37.923613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:18.952 pt2 00:11:18.952 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.952 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:18.952 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:18.952 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:18.952 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.952 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.952 [2024-09-29 21:42:37.930822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:18.952 [2024-09-29 21:42:37.930871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.952 [2024-09-29 21:42:37.930896] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:18.952 [2024-09-29 21:42:37.930924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.952 [2024-09-29 21:42:37.931311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.952 [2024-09-29 21:42:37.931334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:18.952 [2024-09-29 21:42:37.931398] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:18.952 [2024-09-29 21:42:37.931419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:18.952 pt3 00:11:18.952 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.952 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.212 [2024-09-29 21:42:37.938797] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:19.212 [2024-09-29 21:42:37.938842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.212 [2024-09-29 21:42:37.938861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:19.212 [2024-09-29 21:42:37.938869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.212 [2024-09-29 21:42:37.939257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.212 [2024-09-29 21:42:37.939284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:19.212 [2024-09-29 21:42:37.939346] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:19.212 [2024-09-29 21:42:37.939374] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:19.212 [2024-09-29 21:42:37.939515] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:19.212 [2024-09-29 21:42:37.939528] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:19.212 [2024-09-29 21:42:37.939778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:19.212 [2024-09-29 21:42:37.939916] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:19.212 [2024-09-29 21:42:37.939932] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:19.212 [2024-09-29 21:42:37.940080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.212 pt4 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.212 "name": "raid_bdev1", 00:11:19.212 "uuid": "d1b16f00-a280-4809-a7d7-348b34d6c4ac", 00:11:19.212 "strip_size_kb": 64, 00:11:19.212 "state": "online", 00:11:19.212 "raid_level": "concat", 00:11:19.212 
"superblock": true, 00:11:19.212 "num_base_bdevs": 4, 00:11:19.212 "num_base_bdevs_discovered": 4, 00:11:19.212 "num_base_bdevs_operational": 4, 00:11:19.212 "base_bdevs_list": [ 00:11:19.212 { 00:11:19.212 "name": "pt1", 00:11:19.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:19.212 "is_configured": true, 00:11:19.212 "data_offset": 2048, 00:11:19.212 "data_size": 63488 00:11:19.212 }, 00:11:19.212 { 00:11:19.212 "name": "pt2", 00:11:19.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.212 "is_configured": true, 00:11:19.212 "data_offset": 2048, 00:11:19.212 "data_size": 63488 00:11:19.212 }, 00:11:19.212 { 00:11:19.212 "name": "pt3", 00:11:19.212 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.212 "is_configured": true, 00:11:19.212 "data_offset": 2048, 00:11:19.212 "data_size": 63488 00:11:19.212 }, 00:11:19.212 { 00:11:19.212 "name": "pt4", 00:11:19.212 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:19.212 "is_configured": true, 00:11:19.212 "data_offset": 2048, 00:11:19.212 "data_size": 63488 00:11:19.212 } 00:11:19.212 ] 00:11:19.212 }' 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.212 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.471 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:19.471 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:19.471 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:19.471 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:19.471 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:19.471 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:19.471 21:42:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:19.471 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:19.471 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.471 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.471 [2024-09-29 21:42:38.314474] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.471 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.471 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:19.472 "name": "raid_bdev1", 00:11:19.472 "aliases": [ 00:11:19.472 "d1b16f00-a280-4809-a7d7-348b34d6c4ac" 00:11:19.472 ], 00:11:19.472 "product_name": "Raid Volume", 00:11:19.472 "block_size": 512, 00:11:19.472 "num_blocks": 253952, 00:11:19.472 "uuid": "d1b16f00-a280-4809-a7d7-348b34d6c4ac", 00:11:19.472 "assigned_rate_limits": { 00:11:19.472 "rw_ios_per_sec": 0, 00:11:19.472 "rw_mbytes_per_sec": 0, 00:11:19.472 "r_mbytes_per_sec": 0, 00:11:19.472 "w_mbytes_per_sec": 0 00:11:19.472 }, 00:11:19.472 "claimed": false, 00:11:19.472 "zoned": false, 00:11:19.472 "supported_io_types": { 00:11:19.472 "read": true, 00:11:19.472 "write": true, 00:11:19.472 "unmap": true, 00:11:19.472 "flush": true, 00:11:19.472 "reset": true, 00:11:19.472 "nvme_admin": false, 00:11:19.472 "nvme_io": false, 00:11:19.472 "nvme_io_md": false, 00:11:19.472 "write_zeroes": true, 00:11:19.472 "zcopy": false, 00:11:19.472 "get_zone_info": false, 00:11:19.472 "zone_management": false, 00:11:19.472 "zone_append": false, 00:11:19.472 "compare": false, 00:11:19.472 "compare_and_write": false, 00:11:19.472 "abort": false, 00:11:19.472 "seek_hole": false, 00:11:19.472 "seek_data": false, 00:11:19.472 "copy": false, 00:11:19.472 "nvme_iov_md": false 00:11:19.472 }, 00:11:19.472 
"memory_domains": [ 00:11:19.472 { 00:11:19.472 "dma_device_id": "system", 00:11:19.472 "dma_device_type": 1 00:11:19.472 }, 00:11:19.472 { 00:11:19.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.472 "dma_device_type": 2 00:11:19.472 }, 00:11:19.472 { 00:11:19.472 "dma_device_id": "system", 00:11:19.472 "dma_device_type": 1 00:11:19.472 }, 00:11:19.472 { 00:11:19.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.472 "dma_device_type": 2 00:11:19.472 }, 00:11:19.472 { 00:11:19.472 "dma_device_id": "system", 00:11:19.472 "dma_device_type": 1 00:11:19.472 }, 00:11:19.472 { 00:11:19.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.472 "dma_device_type": 2 00:11:19.472 }, 00:11:19.472 { 00:11:19.472 "dma_device_id": "system", 00:11:19.472 "dma_device_type": 1 00:11:19.472 }, 00:11:19.472 { 00:11:19.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.472 "dma_device_type": 2 00:11:19.472 } 00:11:19.472 ], 00:11:19.472 "driver_specific": { 00:11:19.472 "raid": { 00:11:19.472 "uuid": "d1b16f00-a280-4809-a7d7-348b34d6c4ac", 00:11:19.472 "strip_size_kb": 64, 00:11:19.472 "state": "online", 00:11:19.472 "raid_level": "concat", 00:11:19.472 "superblock": true, 00:11:19.472 "num_base_bdevs": 4, 00:11:19.472 "num_base_bdevs_discovered": 4, 00:11:19.472 "num_base_bdevs_operational": 4, 00:11:19.472 "base_bdevs_list": [ 00:11:19.472 { 00:11:19.472 "name": "pt1", 00:11:19.472 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:19.472 "is_configured": true, 00:11:19.472 "data_offset": 2048, 00:11:19.472 "data_size": 63488 00:11:19.472 }, 00:11:19.472 { 00:11:19.472 "name": "pt2", 00:11:19.472 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.472 "is_configured": true, 00:11:19.472 "data_offset": 2048, 00:11:19.472 "data_size": 63488 00:11:19.472 }, 00:11:19.472 { 00:11:19.472 "name": "pt3", 00:11:19.472 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.472 "is_configured": true, 00:11:19.472 "data_offset": 2048, 00:11:19.472 "data_size": 63488 
00:11:19.472 }, 00:11:19.472 { 00:11:19.472 "name": "pt4", 00:11:19.472 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:19.472 "is_configured": true, 00:11:19.472 "data_offset": 2048, 00:11:19.472 "data_size": 63488 00:11:19.472 } 00:11:19.472 ] 00:11:19.472 } 00:11:19.472 } 00:11:19.472 }' 00:11:19.472 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:19.472 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:19.472 pt2 00:11:19.472 pt3 00:11:19.472 pt4' 00:11:19.472 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.472 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:19.472 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.472 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:19.472 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.472 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.731 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.732 
21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.732 [2024-09-29 21:42:38.637856] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d1b16f00-a280-4809-a7d7-348b34d6c4ac '!=' d1b16f00-a280-4809-a7d7-348b34d6c4ac ']' 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72712 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72712 ']' 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72712 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72712 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:19.732 killing process with pid 72712 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72712' 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72712 00:11:19.732 [2024-09-29 21:42:38.711574] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.732 [2024-09-29 21:42:38.711660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.732 [2024-09-29 21:42:38.711741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.732 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72712 00:11:19.732 [2024-09-29 21:42:38.711758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:20.300 [2024-09-29 21:42:39.127600] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.680 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:21.680 00:11:21.680 real 0m5.694s 00:11:21.680 user 0m7.808s 00:11:21.680 sys 0m1.162s 00:11:21.680 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.680 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.680 ************************************ 00:11:21.680 END TEST raid_superblock_test 
00:11:21.680 ************************************ 00:11:21.680 21:42:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:21.680 21:42:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:21.680 21:42:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.680 21:42:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.680 ************************************ 00:11:21.680 START TEST raid_read_error_test 00:11:21.680 ************************************ 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
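An aside on the filter the superblock test above relies on: bdev_raid.sh@188 extracts the names of configured base bdevs from the `bdev_get_bdevs` JSON with jq. A minimal standalone sketch of that filter, run against a fabricated JSON sample (the field names mirror the dump in this log, but the values here are illustrative, not taken from this run; jq is assumed to be installed):

```shell
# Fabricated sample of the driver_specific JSON shape seen in the dump above.
json='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"pt1","is_configured":true},
  {"name":"pt2","is_configured":true},
  {"name":"pt3","is_configured":false},
  {"name":"pt4","is_configured":true}]}}}'

# Same filter as bdev_raid.sh@188: keep only configured base bdevs.
base_bdev_names=$(printf '%s\n' "$json" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
printf '%s\n' "$base_bdev_names"
```

The @189/@192 steps in the log apply the same idea to per-bdev fields, joining `.block_size, .md_size, .md_interleave, .dif_type` with `join(" ")` and comparing the raid bdev's string against each base bdev's.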
(( i++ )) 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IXflZFOrzw 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72971 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72971 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72971 ']' 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.680 21:42:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.680 [2024-09-29 21:42:40.631398] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:21.680 [2024-09-29 21:42:40.631530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72971 ] 00:11:21.940 [2024-09-29 21:42:40.797353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.200 [2024-09-29 21:42:41.043937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.459 [2024-09-29 21:42:41.270876] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.459 [2024-09-29 21:42:41.270913] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 BaseBdev1_malloc 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 true 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 [2024-09-29 21:42:41.511764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:22.720 [2024-09-29 21:42:41.511830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.720 [2024-09-29 21:42:41.511848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:22.720 [2024-09-29 21:42:41.511859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.720 [2024-09-29 21:42:41.514215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.720 [2024-09-29 21:42:41.514255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:22.720 BaseBdev1 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 BaseBdev2_malloc 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 true 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 [2024-09-29 21:42:41.600419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:22.720 [2024-09-29 21:42:41.600474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.720 [2024-09-29 21:42:41.600490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:22.720 [2024-09-29 21:42:41.600501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.720 [2024-09-29 21:42:41.602791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.720 [2024-09-29 21:42:41.602829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:22.720 BaseBdev2 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 BaseBdev3_malloc 00:11:22.720 21:42:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 true 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.720 [2024-09-29 21:42:41.672985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:22.720 [2024-09-29 21:42:41.673046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.720 [2024-09-29 21:42:41.673064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:22.720 [2024-09-29 21:42:41.673075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.720 [2024-09-29 21:42:41.675386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.720 [2024-09-29 21:42:41.675423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:22.720 BaseBdev3 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.720 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.980 BaseBdev4_malloc 00:11:22.980 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.980 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:22.980 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.980 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.980 true 00:11:22.980 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.980 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:22.980 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.980 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.980 [2024-09-29 21:42:41.746136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:22.980 [2024-09-29 21:42:41.746192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.980 [2024-09-29 21:42:41.746211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:22.980 [2024-09-29 21:42:41.746222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.980 [2024-09-29 21:42:41.748589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.980 [2024-09-29 21:42:41.748632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:22.980 BaseBdev4 00:11:22.980 21:42:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.981 [2024-09-29 21:42:41.758233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.981 [2024-09-29 21:42:41.760333] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:22.981 [2024-09-29 21:42:41.760413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.981 [2024-09-29 21:42:41.760471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:22.981 [2024-09-29 21:42:41.760691] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:22.981 [2024-09-29 21:42:41.760711] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:22.981 [2024-09-29 21:42:41.760950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:22.981 [2024-09-29 21:42:41.761125] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:22.981 [2024-09-29 21:42:41.761135] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:22.981 [2024-09-29 21:42:41.761291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:22.981 21:42:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.981 "name": "raid_bdev1", 00:11:22.981 "uuid": "a54838e8-d576-4d3a-b5d1-c94e42c8fe6e", 00:11:22.981 "strip_size_kb": 64, 00:11:22.981 "state": "online", 00:11:22.981 "raid_level": "concat", 00:11:22.981 "superblock": true, 00:11:22.981 "num_base_bdevs": 4, 00:11:22.981 "num_base_bdevs_discovered": 4, 00:11:22.981 "num_base_bdevs_operational": 4, 00:11:22.981 "base_bdevs_list": [ 
00:11:22.981 { 00:11:22.981 "name": "BaseBdev1", 00:11:22.981 "uuid": "89d9ff3d-4f26-5f55-a62a-1f5b1dff3e58", 00:11:22.981 "is_configured": true, 00:11:22.981 "data_offset": 2048, 00:11:22.981 "data_size": 63488 00:11:22.981 }, 00:11:22.981 { 00:11:22.981 "name": "BaseBdev2", 00:11:22.981 "uuid": "d9a37fb1-84e3-5623-97ac-c40fee08c8d0", 00:11:22.981 "is_configured": true, 00:11:22.981 "data_offset": 2048, 00:11:22.981 "data_size": 63488 00:11:22.981 }, 00:11:22.981 { 00:11:22.981 "name": "BaseBdev3", 00:11:22.981 "uuid": "1ff70db6-0d78-5249-9bee-042b43f01410", 00:11:22.981 "is_configured": true, 00:11:22.981 "data_offset": 2048, 00:11:22.981 "data_size": 63488 00:11:22.981 }, 00:11:22.981 { 00:11:22.981 "name": "BaseBdev4", 00:11:22.981 "uuid": "ec50b410-af6a-5812-bcdc-8deec22f55f0", 00:11:22.981 "is_configured": true, 00:11:22.981 "data_offset": 2048, 00:11:22.981 "data_size": 63488 00:11:22.981 } 00:11:22.981 ] 00:11:22.981 }' 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.981 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.240 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:23.240 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:23.499 [2024-09-29 21:42:42.314691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:24.438 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:24.438 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.438 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.438 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.438 21:42:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:24.438 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:24.438 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:24.438 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:24.438 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.439 21:42:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.439 "name": "raid_bdev1", 00:11:24.439 "uuid": "a54838e8-d576-4d3a-b5d1-c94e42c8fe6e", 00:11:24.439 "strip_size_kb": 64, 00:11:24.439 "state": "online", 00:11:24.439 "raid_level": "concat", 00:11:24.439 "superblock": true, 00:11:24.439 "num_base_bdevs": 4, 00:11:24.439 "num_base_bdevs_discovered": 4, 00:11:24.439 "num_base_bdevs_operational": 4, 00:11:24.439 "base_bdevs_list": [ 00:11:24.439 { 00:11:24.439 "name": "BaseBdev1", 00:11:24.439 "uuid": "89d9ff3d-4f26-5f55-a62a-1f5b1dff3e58", 00:11:24.439 "is_configured": true, 00:11:24.439 "data_offset": 2048, 00:11:24.439 "data_size": 63488 00:11:24.439 }, 00:11:24.439 { 00:11:24.439 "name": "BaseBdev2", 00:11:24.439 "uuid": "d9a37fb1-84e3-5623-97ac-c40fee08c8d0", 00:11:24.439 "is_configured": true, 00:11:24.439 "data_offset": 2048, 00:11:24.439 "data_size": 63488 00:11:24.439 }, 00:11:24.439 { 00:11:24.439 "name": "BaseBdev3", 00:11:24.439 "uuid": "1ff70db6-0d78-5249-9bee-042b43f01410", 00:11:24.439 "is_configured": true, 00:11:24.439 "data_offset": 2048, 00:11:24.439 "data_size": 63488 00:11:24.439 }, 00:11:24.439 { 00:11:24.439 "name": "BaseBdev4", 00:11:24.439 "uuid": "ec50b410-af6a-5812-bcdc-8deec22f55f0", 00:11:24.439 "is_configured": true, 00:11:24.439 "data_offset": 2048, 00:11:24.439 "data_size": 63488 00:11:24.439 } 00:11:24.439 ] 00:11:24.439 }' 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.439 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.699 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:24.699 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.699 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.699 [2024-09-29 21:42:43.670634] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:24.699 [2024-09-29 21:42:43.670680] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.699 [2024-09-29 21:42:43.673309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.699 [2024-09-29 21:42:43.673379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.699 [2024-09-29 21:42:43.673438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.699 [2024-09-29 21:42:43.673482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:24.699 { 00:11:24.699 "results": [ 00:11:24.699 { 00:11:24.699 "job": "raid_bdev1", 00:11:24.699 "core_mask": "0x1", 00:11:24.699 "workload": "randrw", 00:11:24.699 "percentage": 50, 00:11:24.699 "status": "finished", 00:11:24.699 "queue_depth": 1, 00:11:24.699 "io_size": 131072, 00:11:24.699 "runtime": 1.356599, 00:11:24.699 "iops": 14445.683654491859, 00:11:24.699 "mibps": 1805.7104568114823, 00:11:24.699 "io_failed": 1, 00:11:24.699 "io_timeout": 0, 00:11:24.699 "avg_latency_us": 97.62602030061885, 00:11:24.699 "min_latency_us": 24.705676855895195, 00:11:24.699 "max_latency_us": 1316.4436681222708 00:11:24.699 } 00:11:24.699 ], 00:11:24.699 "core_count": 1 00:11:24.699 } 00:11:24.699 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.699 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72971 00:11:24.699 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72971 ']' 00:11:24.699 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72971 00:11:24.699 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:24.959 21:42:43 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:24.959 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72971 00:11:24.959 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:24.959 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:24.959 killing process with pid 72971 00:11:24.959 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72971' 00:11:24.959 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72971 00:11:24.959 [2024-09-29 21:42:43.713322] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:24.959 21:42:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72971 00:11:25.218 [2024-09-29 21:42:44.053165] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.597 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IXflZFOrzw 00:11:26.597 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:26.597 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:26.597 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:26.597 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:26.597 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.597 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:26.597 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:26.597 00:11:26.597 real 0m4.910s 00:11:26.597 user 0m5.608s 00:11:26.597 sys 0m0.715s 00:11:26.597 21:42:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:26.597 21:42:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.597 ************************************ 00:11:26.597 END TEST raid_read_error_test 00:11:26.597 ************************************ 00:11:26.597 21:42:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:26.597 21:42:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:26.597 21:42:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.597 21:42:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.597 ************************************ 00:11:26.597 START TEST raid_write_error_test 00:11:26.597 ************************************ 00:11:26.597 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:11:26.597 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:26.597 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:26.597 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hxdhK0ChG1 00:11:26.598 21:42:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73122 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73122 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73122 ']' 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.598 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.856 [2024-09-29 21:42:45.622470] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:26.856 [2024-09-29 21:42:45.622591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73122 ] 00:11:26.857 [2024-09-29 21:42:45.791053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.115 [2024-09-29 21:42:46.034015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.378 [2024-09-29 21:42:46.261713] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.378 [2024-09-29 21:42:46.261754] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.637 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.637 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.638 BaseBdev1_malloc 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.638 true 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.638 [2024-09-29 21:42:46.510754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:27.638 [2024-09-29 21:42:46.510818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.638 [2024-09-29 21:42:46.510852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:27.638 [2024-09-29 21:42:46.510864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.638 [2024-09-29 21:42:46.513241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.638 [2024-09-29 21:42:46.513282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:27.638 BaseBdev1 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.638 BaseBdev2_malloc 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:27.638 21:42:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.638 true 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.638 [2024-09-29 21:42:46.602219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:27.638 [2024-09-29 21:42:46.602280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.638 [2024-09-29 21:42:46.602313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:27.638 [2024-09-29 21:42:46.602335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.638 [2024-09-29 21:42:46.604678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.638 [2024-09-29 21:42:46.604717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:27.638 BaseBdev2 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.638 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:27.898 BaseBdev3_malloc 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.898 true 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.898 [2024-09-29 21:42:46.675350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:27.898 [2024-09-29 21:42:46.675406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.898 [2024-09-29 21:42:46.675422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:27.898 [2024-09-29 21:42:46.675433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.898 [2024-09-29 21:42:46.677795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.898 [2024-09-29 21:42:46.677837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:27.898 BaseBdev3 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.898 BaseBdev4_malloc 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.898 true 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.898 [2024-09-29 21:42:46.748928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:27.898 [2024-09-29 21:42:46.748982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.898 [2024-09-29 21:42:46.749015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:27.898 [2024-09-29 21:42:46.749027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.898 [2024-09-29 21:42:46.751287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.898 [2024-09-29 21:42:46.751324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:27.898 BaseBdev4 
00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.898 [2024-09-29 21:42:46.760993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.898 [2024-09-29 21:42:46.763003] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.898 [2024-09-29 21:42:46.763087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.898 [2024-09-29 21:42:46.763144] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:27.898 [2024-09-29 21:42:46.763373] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:27.898 [2024-09-29 21:42:46.763393] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:27.898 [2024-09-29 21:42:46.763627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:27.898 [2024-09-29 21:42:46.763795] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:27.898 [2024-09-29 21:42:46.763807] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:27.898 [2024-09-29 21:42:46.763963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.898 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.899 "name": "raid_bdev1", 00:11:27.899 "uuid": "7b7c3fc8-1292-4b83-883a-89fa4d17061e", 00:11:27.899 "strip_size_kb": 64, 00:11:27.899 "state": "online", 00:11:27.899 "raid_level": "concat", 00:11:27.899 "superblock": true, 00:11:27.899 "num_base_bdevs": 4, 00:11:27.899 "num_base_bdevs_discovered": 4, 00:11:27.899 
"num_base_bdevs_operational": 4, 00:11:27.899 "base_bdevs_list": [ 00:11:27.899 { 00:11:27.899 "name": "BaseBdev1", 00:11:27.899 "uuid": "16568d1f-efc9-534d-b0f7-cfe0b2a31715", 00:11:27.899 "is_configured": true, 00:11:27.899 "data_offset": 2048, 00:11:27.899 "data_size": 63488 00:11:27.899 }, 00:11:27.899 { 00:11:27.899 "name": "BaseBdev2", 00:11:27.899 "uuid": "94eee69c-5987-55ef-9b50-4da16852cc18", 00:11:27.899 "is_configured": true, 00:11:27.899 "data_offset": 2048, 00:11:27.899 "data_size": 63488 00:11:27.899 }, 00:11:27.899 { 00:11:27.899 "name": "BaseBdev3", 00:11:27.899 "uuid": "f9233e51-29e8-5a97-ba1e-e7b5f4001e1c", 00:11:27.899 "is_configured": true, 00:11:27.899 "data_offset": 2048, 00:11:27.899 "data_size": 63488 00:11:27.899 }, 00:11:27.899 { 00:11:27.899 "name": "BaseBdev4", 00:11:27.899 "uuid": "ac5a50a6-bcb4-54ba-accb-0539ea0c3ae1", 00:11:27.899 "is_configured": true, 00:11:27.899 "data_offset": 2048, 00:11:27.899 "data_size": 63488 00:11:27.899 } 00:11:27.899 ] 00:11:27.899 }' 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.899 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.467 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:28.468 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:28.468 [2024-09-29 21:42:47.349320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.411 21:42:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.411 "name": "raid_bdev1", 00:11:29.411 "uuid": "7b7c3fc8-1292-4b83-883a-89fa4d17061e", 00:11:29.411 "strip_size_kb": 64, 00:11:29.411 "state": "online", 00:11:29.411 "raid_level": "concat", 00:11:29.411 "superblock": true, 00:11:29.411 "num_base_bdevs": 4, 00:11:29.411 "num_base_bdevs_discovered": 4, 00:11:29.411 "num_base_bdevs_operational": 4, 00:11:29.411 "base_bdevs_list": [ 00:11:29.411 { 00:11:29.411 "name": "BaseBdev1", 00:11:29.411 "uuid": "16568d1f-efc9-534d-b0f7-cfe0b2a31715", 00:11:29.411 "is_configured": true, 00:11:29.411 "data_offset": 2048, 00:11:29.411 "data_size": 63488 00:11:29.411 }, 00:11:29.411 { 00:11:29.411 "name": "BaseBdev2", 00:11:29.411 "uuid": "94eee69c-5987-55ef-9b50-4da16852cc18", 00:11:29.411 "is_configured": true, 00:11:29.411 "data_offset": 2048, 00:11:29.411 "data_size": 63488 00:11:29.411 }, 00:11:29.411 { 00:11:29.411 "name": "BaseBdev3", 00:11:29.411 "uuid": "f9233e51-29e8-5a97-ba1e-e7b5f4001e1c", 00:11:29.411 "is_configured": true, 00:11:29.411 "data_offset": 2048, 00:11:29.411 "data_size": 63488 00:11:29.411 }, 00:11:29.411 { 00:11:29.411 "name": "BaseBdev4", 00:11:29.411 "uuid": "ac5a50a6-bcb4-54ba-accb-0539ea0c3ae1", 00:11:29.411 "is_configured": true, 00:11:29.411 "data_offset": 2048, 00:11:29.411 "data_size": 63488 00:11:29.411 } 00:11:29.411 ] 00:11:29.411 }' 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.411 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.995 [2024-09-29 21:42:48.681744] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:29.995 [2024-09-29 21:42:48.681793] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.995 [2024-09-29 21:42:48.684399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.995 [2024-09-29 21:42:48.684466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.995 [2024-09-29 21:42:48.684516] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.995 [2024-09-29 21:42:48.684530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:29.995 { 00:11:29.995 "results": [ 00:11:29.995 { 00:11:29.995 "job": "raid_bdev1", 00:11:29.995 "core_mask": "0x1", 00:11:29.995 "workload": "randrw", 00:11:29.995 "percentage": 50, 00:11:29.995 "status": "finished", 00:11:29.995 "queue_depth": 1, 00:11:29.995 "io_size": 131072, 00:11:29.995 "runtime": 1.332996, 00:11:29.995 "iops": 14302.368499230306, 00:11:29.995 "mibps": 1787.7960624037883, 00:11:29.995 "io_failed": 1, 00:11:29.995 "io_timeout": 0, 00:11:29.995 "avg_latency_us": 98.65453810871635, 00:11:29.995 "min_latency_us": 24.929257641921396, 00:11:29.995 "max_latency_us": 1280.6707423580785 00:11:29.995 } 00:11:29.995 ], 00:11:29.995 "core_count": 1 00:11:29.995 } 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73122 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73122 ']' 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73122 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73122 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:29.995 killing process with pid 73122 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73122' 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73122 00:11:29.995 [2024-09-29 21:42:48.730589] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.995 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73122 00:11:30.253 [2024-09-29 21:42:49.078411] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:31.661 21:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hxdhK0ChG1 00:11:31.661 21:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:31.661 21:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:31.661 21:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:31.661 21:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:31.661 21:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:31.661 21:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:31.661 21:42:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:31.661 00:11:31.661 real 0m4.958s 00:11:31.661 user 0m5.665s 
00:11:31.661 sys 0m0.726s 00:11:31.661 21:42:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:31.661 21:42:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.661 ************************************ 00:11:31.661 END TEST raid_write_error_test 00:11:31.661 ************************************ 00:11:31.661 21:42:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:31.661 21:42:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:31.661 21:42:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:31.661 21:42:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:31.661 21:42:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:31.661 ************************************ 00:11:31.661 START TEST raid_state_function_test 00:11:31.661 ************************************ 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:31.661 
21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:31.661 21:42:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73266 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:31.661 Process raid pid: 73266 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73266' 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73266 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73266 ']' 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.661 21:42:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.661 [2024-09-29 21:42:50.639761] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:31.661 [2024-09-29 21:42:50.639873] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.921 [2024-09-29 21:42:50.803944] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.181 [2024-09-29 21:42:51.054654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.440 [2024-09-29 21:42:51.278332] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.440 [2024-09-29 21:42:51.278368] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.700 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:32.700 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:32.700 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:32.700 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.700 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.700 [2024-09-29 21:42:51.458681] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:32.700 [2024-09-29 21:42:51.458743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:32.700 [2024-09-29 21:42:51.458753] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.700 [2024-09-29 21:42:51.458763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.700 [2024-09-29 21:42:51.458769] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:32.700 [2024-09-29 21:42:51.458779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.700 [2024-09-29 21:42:51.458785] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:32.701 [2024-09-29 21:42:51.458795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.701 "name": "Existed_Raid", 00:11:32.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.701 "strip_size_kb": 0, 00:11:32.701 "state": "configuring", 00:11:32.701 "raid_level": "raid1", 00:11:32.701 "superblock": false, 00:11:32.701 "num_base_bdevs": 4, 00:11:32.701 "num_base_bdevs_discovered": 0, 00:11:32.701 "num_base_bdevs_operational": 4, 00:11:32.701 "base_bdevs_list": [ 00:11:32.701 { 00:11:32.701 "name": "BaseBdev1", 00:11:32.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.701 "is_configured": false, 00:11:32.701 "data_offset": 0, 00:11:32.701 "data_size": 0 00:11:32.701 }, 00:11:32.701 { 00:11:32.701 "name": "BaseBdev2", 00:11:32.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.701 "is_configured": false, 00:11:32.701 "data_offset": 0, 00:11:32.701 "data_size": 0 00:11:32.701 }, 00:11:32.701 { 00:11:32.701 "name": "BaseBdev3", 00:11:32.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.701 "is_configured": false, 00:11:32.701 "data_offset": 0, 00:11:32.701 "data_size": 0 00:11:32.701 }, 00:11:32.701 { 00:11:32.701 "name": "BaseBdev4", 00:11:32.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.701 "is_configured": false, 00:11:32.701 "data_offset": 0, 00:11:32.701 "data_size": 0 00:11:32.701 } 00:11:32.701 ] 00:11:32.701 }' 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.701 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.961 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:32.961 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.961 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.961 [2024-09-29 21:42:51.845915] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.961 [2024-09-29 21:42:51.845960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:32.961 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.961 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:32.961 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.961 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.961 [2024-09-29 21:42:51.857914] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:32.961 [2024-09-29 21:42:51.857957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:32.961 [2024-09-29 21:42:51.857965] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.961 [2024-09-29 21:42:51.857974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.961 [2024-09-29 21:42:51.857979] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:32.961 [2024-09-29 21:42:51.857988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.961 [2024-09-29 21:42:51.857993] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:32.961 [2024-09-29 21:42:51.858002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:32.961 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.961 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:32.961 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.961 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.961 [2024-09-29 21:42:51.941254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.222 BaseBdev1 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.222 [ 00:11:33.222 { 00:11:33.222 "name": "BaseBdev1", 00:11:33.222 "aliases": [ 00:11:33.222 "7a3571d2-acfc-48c2-87a7-6ebdde72aa76" 00:11:33.222 ], 00:11:33.222 "product_name": "Malloc disk", 00:11:33.222 "block_size": 512, 00:11:33.222 "num_blocks": 65536, 00:11:33.222 "uuid": "7a3571d2-acfc-48c2-87a7-6ebdde72aa76", 00:11:33.222 "assigned_rate_limits": { 00:11:33.222 "rw_ios_per_sec": 0, 00:11:33.222 "rw_mbytes_per_sec": 0, 00:11:33.222 "r_mbytes_per_sec": 0, 00:11:33.222 "w_mbytes_per_sec": 0 00:11:33.222 }, 00:11:33.222 "claimed": true, 00:11:33.222 "claim_type": "exclusive_write", 00:11:33.222 "zoned": false, 00:11:33.222 "supported_io_types": { 00:11:33.222 "read": true, 00:11:33.222 "write": true, 00:11:33.222 "unmap": true, 00:11:33.222 "flush": true, 00:11:33.222 "reset": true, 00:11:33.222 "nvme_admin": false, 00:11:33.222 "nvme_io": false, 00:11:33.222 "nvme_io_md": false, 00:11:33.222 "write_zeroes": true, 00:11:33.222 "zcopy": true, 00:11:33.222 "get_zone_info": false, 00:11:33.222 "zone_management": false, 00:11:33.222 "zone_append": false, 00:11:33.222 "compare": false, 00:11:33.222 "compare_and_write": false, 00:11:33.222 "abort": true, 00:11:33.222 "seek_hole": false, 00:11:33.222 "seek_data": false, 00:11:33.222 "copy": true, 00:11:33.222 "nvme_iov_md": false 00:11:33.222 }, 00:11:33.222 "memory_domains": [ 00:11:33.222 { 00:11:33.222 "dma_device_id": "system", 00:11:33.222 "dma_device_type": 1 00:11:33.222 }, 00:11:33.222 { 00:11:33.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.222 "dma_device_type": 2 00:11:33.222 } 00:11:33.222 ], 00:11:33.222 "driver_specific": {} 00:11:33.222 } 00:11:33.222 ] 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.222 21:42:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.222 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.222 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.222 "name": "Existed_Raid", 00:11:33.222 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:33.222 "strip_size_kb": 0, 00:11:33.222 "state": "configuring", 00:11:33.222 "raid_level": "raid1", 00:11:33.222 "superblock": false, 00:11:33.222 "num_base_bdevs": 4, 00:11:33.222 "num_base_bdevs_discovered": 1, 00:11:33.222 "num_base_bdevs_operational": 4, 00:11:33.222 "base_bdevs_list": [ 00:11:33.222 { 00:11:33.222 "name": "BaseBdev1", 00:11:33.222 "uuid": "7a3571d2-acfc-48c2-87a7-6ebdde72aa76", 00:11:33.222 "is_configured": true, 00:11:33.222 "data_offset": 0, 00:11:33.222 "data_size": 65536 00:11:33.222 }, 00:11:33.222 { 00:11:33.222 "name": "BaseBdev2", 00:11:33.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.222 "is_configured": false, 00:11:33.222 "data_offset": 0, 00:11:33.222 "data_size": 0 00:11:33.222 }, 00:11:33.222 { 00:11:33.222 "name": "BaseBdev3", 00:11:33.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.222 "is_configured": false, 00:11:33.222 "data_offset": 0, 00:11:33.222 "data_size": 0 00:11:33.222 }, 00:11:33.222 { 00:11:33.222 "name": "BaseBdev4", 00:11:33.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.222 "is_configured": false, 00:11:33.222 "data_offset": 0, 00:11:33.222 "data_size": 0 00:11:33.222 } 00:11:33.222 ] 00:11:33.222 }' 00:11:33.222 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.222 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.483 [2024-09-29 21:42:52.428425] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.483 [2024-09-29 21:42:52.428521] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.483 [2024-09-29 21:42:52.436464] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.483 [2024-09-29 21:42:52.438625] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.483 [2024-09-29 21:42:52.438720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.483 [2024-09-29 21:42:52.438751] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:33.483 [2024-09-29 21:42:52.438774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:33.483 [2024-09-29 21:42:52.438793] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:33.483 [2024-09-29 21:42:52.438812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.483 21:42:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.483 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.743 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.743 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.743 "name": "Existed_Raid", 00:11:33.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.743 "strip_size_kb": 0, 00:11:33.743 "state": "configuring", 00:11:33.743 "raid_level": "raid1", 00:11:33.743 "superblock": false, 00:11:33.743 "num_base_bdevs": 4, 00:11:33.743 "num_base_bdevs_discovered": 1, 00:11:33.743 
"num_base_bdevs_operational": 4, 00:11:33.743 "base_bdevs_list": [ 00:11:33.743 { 00:11:33.743 "name": "BaseBdev1", 00:11:33.743 "uuid": "7a3571d2-acfc-48c2-87a7-6ebdde72aa76", 00:11:33.743 "is_configured": true, 00:11:33.743 "data_offset": 0, 00:11:33.743 "data_size": 65536 00:11:33.743 }, 00:11:33.743 { 00:11:33.743 "name": "BaseBdev2", 00:11:33.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.743 "is_configured": false, 00:11:33.743 "data_offset": 0, 00:11:33.743 "data_size": 0 00:11:33.743 }, 00:11:33.743 { 00:11:33.743 "name": "BaseBdev3", 00:11:33.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.743 "is_configured": false, 00:11:33.743 "data_offset": 0, 00:11:33.743 "data_size": 0 00:11:33.743 }, 00:11:33.743 { 00:11:33.743 "name": "BaseBdev4", 00:11:33.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.743 "is_configured": false, 00:11:33.743 "data_offset": 0, 00:11:33.743 "data_size": 0 00:11:33.743 } 00:11:33.743 ] 00:11:33.743 }' 00:11:33.743 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.743 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.003 [2024-09-29 21:42:52.924298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:34.003 BaseBdev2 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev2 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.003 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.003 [ 00:11:34.003 { 00:11:34.004 "name": "BaseBdev2", 00:11:34.004 "aliases": [ 00:11:34.004 "6576aecd-f4fb-4cad-a813-01e93b7f77c7" 00:11:34.004 ], 00:11:34.004 "product_name": "Malloc disk", 00:11:34.004 "block_size": 512, 00:11:34.004 "num_blocks": 65536, 00:11:34.004 "uuid": "6576aecd-f4fb-4cad-a813-01e93b7f77c7", 00:11:34.004 "assigned_rate_limits": { 00:11:34.004 "rw_ios_per_sec": 0, 00:11:34.004 "rw_mbytes_per_sec": 0, 00:11:34.004 "r_mbytes_per_sec": 0, 00:11:34.004 "w_mbytes_per_sec": 0 00:11:34.004 }, 00:11:34.004 "claimed": true, 00:11:34.004 "claim_type": "exclusive_write", 00:11:34.004 "zoned": false, 00:11:34.004 "supported_io_types": { 00:11:34.004 "read": true, 00:11:34.004 "write": true, 00:11:34.004 
"unmap": true, 00:11:34.004 "flush": true, 00:11:34.004 "reset": true, 00:11:34.004 "nvme_admin": false, 00:11:34.004 "nvme_io": false, 00:11:34.004 "nvme_io_md": false, 00:11:34.004 "write_zeroes": true, 00:11:34.004 "zcopy": true, 00:11:34.004 "get_zone_info": false, 00:11:34.004 "zone_management": false, 00:11:34.004 "zone_append": false, 00:11:34.004 "compare": false, 00:11:34.004 "compare_and_write": false, 00:11:34.004 "abort": true, 00:11:34.004 "seek_hole": false, 00:11:34.004 "seek_data": false, 00:11:34.004 "copy": true, 00:11:34.004 "nvme_iov_md": false 00:11:34.004 }, 00:11:34.004 "memory_domains": [ 00:11:34.004 { 00:11:34.004 "dma_device_id": "system", 00:11:34.004 "dma_device_type": 1 00:11:34.004 }, 00:11:34.004 { 00:11:34.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.004 "dma_device_type": 2 00:11:34.004 } 00:11:34.004 ], 00:11:34.004 "driver_specific": {} 00:11:34.004 } 00:11:34.004 ] 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.004 21:42:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.004 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.264 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.264 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.264 "name": "Existed_Raid", 00:11:34.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.264 "strip_size_kb": 0, 00:11:34.264 "state": "configuring", 00:11:34.264 "raid_level": "raid1", 00:11:34.264 "superblock": false, 00:11:34.264 "num_base_bdevs": 4, 00:11:34.264 "num_base_bdevs_discovered": 2, 00:11:34.264 "num_base_bdevs_operational": 4, 00:11:34.264 "base_bdevs_list": [ 00:11:34.264 { 00:11:34.264 "name": "BaseBdev1", 00:11:34.264 "uuid": "7a3571d2-acfc-48c2-87a7-6ebdde72aa76", 00:11:34.264 "is_configured": true, 00:11:34.264 "data_offset": 0, 00:11:34.264 "data_size": 65536 00:11:34.264 }, 00:11:34.264 { 00:11:34.264 "name": "BaseBdev2", 00:11:34.264 "uuid": "6576aecd-f4fb-4cad-a813-01e93b7f77c7", 00:11:34.264 "is_configured": true, 00:11:34.264 
"data_offset": 0, 00:11:34.264 "data_size": 65536 00:11:34.264 }, 00:11:34.264 { 00:11:34.264 "name": "BaseBdev3", 00:11:34.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.264 "is_configured": false, 00:11:34.264 "data_offset": 0, 00:11:34.264 "data_size": 0 00:11:34.264 }, 00:11:34.264 { 00:11:34.264 "name": "BaseBdev4", 00:11:34.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.264 "is_configured": false, 00:11:34.265 "data_offset": 0, 00:11:34.265 "data_size": 0 00:11:34.265 } 00:11:34.265 ] 00:11:34.265 }' 00:11:34.265 21:42:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.265 21:42:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.525 [2024-09-29 21:42:53.464254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:34.525 BaseBdev3 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.525 [ 00:11:34.525 { 00:11:34.525 "name": "BaseBdev3", 00:11:34.525 "aliases": [ 00:11:34.525 "ed79a48d-438b-4cbd-8d42-1f645b36224d" 00:11:34.525 ], 00:11:34.525 "product_name": "Malloc disk", 00:11:34.525 "block_size": 512, 00:11:34.525 "num_blocks": 65536, 00:11:34.525 "uuid": "ed79a48d-438b-4cbd-8d42-1f645b36224d", 00:11:34.525 "assigned_rate_limits": { 00:11:34.525 "rw_ios_per_sec": 0, 00:11:34.525 "rw_mbytes_per_sec": 0, 00:11:34.525 "r_mbytes_per_sec": 0, 00:11:34.525 "w_mbytes_per_sec": 0 00:11:34.525 }, 00:11:34.525 "claimed": true, 00:11:34.525 "claim_type": "exclusive_write", 00:11:34.525 "zoned": false, 00:11:34.525 "supported_io_types": { 00:11:34.525 "read": true, 00:11:34.525 "write": true, 00:11:34.525 "unmap": true, 00:11:34.525 "flush": true, 00:11:34.525 "reset": true, 00:11:34.525 "nvme_admin": false, 00:11:34.525 "nvme_io": false, 00:11:34.525 "nvme_io_md": false, 00:11:34.525 "write_zeroes": true, 00:11:34.525 "zcopy": true, 00:11:34.525 "get_zone_info": false, 00:11:34.525 "zone_management": false, 00:11:34.525 "zone_append": false, 00:11:34.525 "compare": false, 00:11:34.525 "compare_and_write": false, 00:11:34.525 "abort": true, 
00:11:34.525 "seek_hole": false, 00:11:34.525 "seek_data": false, 00:11:34.525 "copy": true, 00:11:34.525 "nvme_iov_md": false 00:11:34.525 }, 00:11:34.525 "memory_domains": [ 00:11:34.525 { 00:11:34.525 "dma_device_id": "system", 00:11:34.525 "dma_device_type": 1 00:11:34.525 }, 00:11:34.525 { 00:11:34.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.525 "dma_device_type": 2 00:11:34.525 } 00:11:34.525 ], 00:11:34.525 "driver_specific": {} 00:11:34.525 } 00:11:34.525 ] 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.525 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.525 21:42:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.785 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.785 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.785 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.785 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.785 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.785 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.785 "name": "Existed_Raid", 00:11:34.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.785 "strip_size_kb": 0, 00:11:34.785 "state": "configuring", 00:11:34.785 "raid_level": "raid1", 00:11:34.785 "superblock": false, 00:11:34.785 "num_base_bdevs": 4, 00:11:34.785 "num_base_bdevs_discovered": 3, 00:11:34.785 "num_base_bdevs_operational": 4, 00:11:34.785 "base_bdevs_list": [ 00:11:34.785 { 00:11:34.785 "name": "BaseBdev1", 00:11:34.785 "uuid": "7a3571d2-acfc-48c2-87a7-6ebdde72aa76", 00:11:34.785 "is_configured": true, 00:11:34.785 "data_offset": 0, 00:11:34.785 "data_size": 65536 00:11:34.785 }, 00:11:34.785 { 00:11:34.785 "name": "BaseBdev2", 00:11:34.785 "uuid": "6576aecd-f4fb-4cad-a813-01e93b7f77c7", 00:11:34.785 "is_configured": true, 00:11:34.785 "data_offset": 0, 00:11:34.785 "data_size": 65536 00:11:34.785 }, 00:11:34.785 { 00:11:34.785 "name": "BaseBdev3", 00:11:34.785 "uuid": "ed79a48d-438b-4cbd-8d42-1f645b36224d", 00:11:34.785 "is_configured": true, 00:11:34.785 "data_offset": 0, 00:11:34.785 "data_size": 65536 00:11:34.785 }, 00:11:34.786 { 00:11:34.786 "name": "BaseBdev4", 00:11:34.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.786 "is_configured": false, 00:11:34.786 "data_offset": 
0, 00:11:34.786 "data_size": 0 00:11:34.786 } 00:11:34.786 ] 00:11:34.786 }' 00:11:34.786 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.786 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.045 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:35.045 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.045 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.045 [2024-09-29 21:42:53.965776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:35.045 [2024-09-29 21:42:53.965831] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:35.045 [2024-09-29 21:42:53.965842] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:35.045 [2024-09-29 21:42:53.966165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:35.045 [2024-09-29 21:42:53.966363] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:35.046 [2024-09-29 21:42:53.966382] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:35.046 [2024-09-29 21:42:53.966688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.046 BaseBdev4 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.046 21:42:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.046 [ 00:11:35.046 { 00:11:35.046 "name": "BaseBdev4", 00:11:35.046 "aliases": [ 00:11:35.046 "5fdce48e-0c9f-4514-a92d-96e9e99bf5b0" 00:11:35.046 ], 00:11:35.046 "product_name": "Malloc disk", 00:11:35.046 "block_size": 512, 00:11:35.046 "num_blocks": 65536, 00:11:35.046 "uuid": "5fdce48e-0c9f-4514-a92d-96e9e99bf5b0", 00:11:35.046 "assigned_rate_limits": { 00:11:35.046 "rw_ios_per_sec": 0, 00:11:35.046 "rw_mbytes_per_sec": 0, 00:11:35.046 "r_mbytes_per_sec": 0, 00:11:35.046 "w_mbytes_per_sec": 0 00:11:35.046 }, 00:11:35.046 "claimed": true, 00:11:35.046 "claim_type": "exclusive_write", 00:11:35.046 "zoned": false, 00:11:35.046 "supported_io_types": { 00:11:35.046 "read": true, 00:11:35.046 "write": true, 00:11:35.046 "unmap": true, 00:11:35.046 "flush": true, 00:11:35.046 "reset": true, 00:11:35.046 "nvme_admin": false, 00:11:35.046 "nvme_io": 
false, 00:11:35.046 "nvme_io_md": false, 00:11:35.046 "write_zeroes": true, 00:11:35.046 "zcopy": true, 00:11:35.046 "get_zone_info": false, 00:11:35.046 "zone_management": false, 00:11:35.046 "zone_append": false, 00:11:35.046 "compare": false, 00:11:35.046 "compare_and_write": false, 00:11:35.046 "abort": true, 00:11:35.046 "seek_hole": false, 00:11:35.046 "seek_data": false, 00:11:35.046 "copy": true, 00:11:35.046 "nvme_iov_md": false 00:11:35.046 }, 00:11:35.046 "memory_domains": [ 00:11:35.046 { 00:11:35.046 "dma_device_id": "system", 00:11:35.046 "dma_device_type": 1 00:11:35.046 }, 00:11:35.046 { 00:11:35.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.046 "dma_device_type": 2 00:11:35.046 } 00:11:35.046 ], 00:11:35.046 "driver_specific": {} 00:11:35.046 } 00:11:35.046 ] 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.046 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.305 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.305 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.305 "name": "Existed_Raid", 00:11:35.305 "uuid": "220e5f15-f23c-48a9-a29f-a06b0d01626d", 00:11:35.305 "strip_size_kb": 0, 00:11:35.306 "state": "online", 00:11:35.306 "raid_level": "raid1", 00:11:35.306 "superblock": false, 00:11:35.306 "num_base_bdevs": 4, 00:11:35.306 "num_base_bdevs_discovered": 4, 00:11:35.306 "num_base_bdevs_operational": 4, 00:11:35.306 "base_bdevs_list": [ 00:11:35.306 { 00:11:35.306 "name": "BaseBdev1", 00:11:35.306 "uuid": "7a3571d2-acfc-48c2-87a7-6ebdde72aa76", 00:11:35.306 "is_configured": true, 00:11:35.306 "data_offset": 0, 00:11:35.306 "data_size": 65536 00:11:35.306 }, 00:11:35.306 { 00:11:35.306 "name": "BaseBdev2", 00:11:35.306 "uuid": "6576aecd-f4fb-4cad-a813-01e93b7f77c7", 00:11:35.306 "is_configured": true, 00:11:35.306 "data_offset": 0, 00:11:35.306 "data_size": 65536 00:11:35.306 }, 00:11:35.306 { 00:11:35.306 "name": "BaseBdev3", 00:11:35.306 "uuid": "ed79a48d-438b-4cbd-8d42-1f645b36224d", 
00:11:35.306 "is_configured": true, 00:11:35.306 "data_offset": 0, 00:11:35.306 "data_size": 65536 00:11:35.306 }, 00:11:35.306 { 00:11:35.306 "name": "BaseBdev4", 00:11:35.306 "uuid": "5fdce48e-0c9f-4514-a92d-96e9e99bf5b0", 00:11:35.306 "is_configured": true, 00:11:35.306 "data_offset": 0, 00:11:35.306 "data_size": 65536 00:11:35.306 } 00:11:35.306 ] 00:11:35.306 }' 00:11:35.306 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.306 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.565 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:35.565 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:35.565 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.565 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.565 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.565 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.565 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:35.565 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.565 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.565 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.565 [2024-09-29 21:42:54.481248] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.565 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.565 21:42:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.565 "name": "Existed_Raid", 00:11:35.565 "aliases": [ 00:11:35.565 "220e5f15-f23c-48a9-a29f-a06b0d01626d" 00:11:35.565 ], 00:11:35.565 "product_name": "Raid Volume", 00:11:35.565 "block_size": 512, 00:11:35.565 "num_blocks": 65536, 00:11:35.565 "uuid": "220e5f15-f23c-48a9-a29f-a06b0d01626d", 00:11:35.565 "assigned_rate_limits": { 00:11:35.565 "rw_ios_per_sec": 0, 00:11:35.565 "rw_mbytes_per_sec": 0, 00:11:35.565 "r_mbytes_per_sec": 0, 00:11:35.565 "w_mbytes_per_sec": 0 00:11:35.565 }, 00:11:35.565 "claimed": false, 00:11:35.565 "zoned": false, 00:11:35.565 "supported_io_types": { 00:11:35.565 "read": true, 00:11:35.565 "write": true, 00:11:35.565 "unmap": false, 00:11:35.565 "flush": false, 00:11:35.565 "reset": true, 00:11:35.565 "nvme_admin": false, 00:11:35.565 "nvme_io": false, 00:11:35.565 "nvme_io_md": false, 00:11:35.565 "write_zeroes": true, 00:11:35.565 "zcopy": false, 00:11:35.565 "get_zone_info": false, 00:11:35.565 "zone_management": false, 00:11:35.565 "zone_append": false, 00:11:35.565 "compare": false, 00:11:35.565 "compare_and_write": false, 00:11:35.565 "abort": false, 00:11:35.565 "seek_hole": false, 00:11:35.565 "seek_data": false, 00:11:35.565 "copy": false, 00:11:35.565 "nvme_iov_md": false 00:11:35.565 }, 00:11:35.565 "memory_domains": [ 00:11:35.565 { 00:11:35.565 "dma_device_id": "system", 00:11:35.565 "dma_device_type": 1 00:11:35.565 }, 00:11:35.565 { 00:11:35.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.565 "dma_device_type": 2 00:11:35.565 }, 00:11:35.565 { 00:11:35.565 "dma_device_id": "system", 00:11:35.566 "dma_device_type": 1 00:11:35.566 }, 00:11:35.566 { 00:11:35.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.566 "dma_device_type": 2 00:11:35.566 }, 00:11:35.566 { 00:11:35.566 "dma_device_id": "system", 00:11:35.566 "dma_device_type": 1 00:11:35.566 }, 00:11:35.566 { 00:11:35.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.566 "dma_device_type": 2 
00:11:35.566 }, 00:11:35.566 { 00:11:35.566 "dma_device_id": "system", 00:11:35.566 "dma_device_type": 1 00:11:35.566 }, 00:11:35.566 { 00:11:35.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.566 "dma_device_type": 2 00:11:35.566 } 00:11:35.566 ], 00:11:35.566 "driver_specific": { 00:11:35.566 "raid": { 00:11:35.566 "uuid": "220e5f15-f23c-48a9-a29f-a06b0d01626d", 00:11:35.566 "strip_size_kb": 0, 00:11:35.566 "state": "online", 00:11:35.566 "raid_level": "raid1", 00:11:35.566 "superblock": false, 00:11:35.566 "num_base_bdevs": 4, 00:11:35.566 "num_base_bdevs_discovered": 4, 00:11:35.566 "num_base_bdevs_operational": 4, 00:11:35.566 "base_bdevs_list": [ 00:11:35.566 { 00:11:35.566 "name": "BaseBdev1", 00:11:35.566 "uuid": "7a3571d2-acfc-48c2-87a7-6ebdde72aa76", 00:11:35.566 "is_configured": true, 00:11:35.566 "data_offset": 0, 00:11:35.566 "data_size": 65536 00:11:35.566 }, 00:11:35.566 { 00:11:35.566 "name": "BaseBdev2", 00:11:35.566 "uuid": "6576aecd-f4fb-4cad-a813-01e93b7f77c7", 00:11:35.566 "is_configured": true, 00:11:35.566 "data_offset": 0, 00:11:35.566 "data_size": 65536 00:11:35.566 }, 00:11:35.566 { 00:11:35.566 "name": "BaseBdev3", 00:11:35.566 "uuid": "ed79a48d-438b-4cbd-8d42-1f645b36224d", 00:11:35.566 "is_configured": true, 00:11:35.566 "data_offset": 0, 00:11:35.566 "data_size": 65536 00:11:35.566 }, 00:11:35.566 { 00:11:35.566 "name": "BaseBdev4", 00:11:35.566 "uuid": "5fdce48e-0c9f-4514-a92d-96e9e99bf5b0", 00:11:35.566 "is_configured": true, 00:11:35.566 "data_offset": 0, 00:11:35.566 "data_size": 65536 00:11:35.566 } 00:11:35.566 ] 00:11:35.566 } 00:11:35.566 } 00:11:35.566 }' 00:11:35.566 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.825 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:35.825 BaseBdev2 00:11:35.825 BaseBdev3 00:11:35.825 BaseBdev4' 00:11:35.825 
21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.826 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.826 [2024-09-29 21:42:54.792431] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.086 "name": "Existed_Raid", 00:11:36.086 "uuid": "220e5f15-f23c-48a9-a29f-a06b0d01626d", 00:11:36.086 "strip_size_kb": 0, 00:11:36.086 "state": "online", 00:11:36.086 "raid_level": "raid1", 00:11:36.086 "superblock": false, 00:11:36.086 "num_base_bdevs": 4, 00:11:36.086 "num_base_bdevs_discovered": 3, 00:11:36.086 "num_base_bdevs_operational": 3, 00:11:36.086 "base_bdevs_list": [ 00:11:36.086 { 00:11:36.086 "name": null, 00:11:36.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.086 "is_configured": false, 00:11:36.086 "data_offset": 0, 00:11:36.086 "data_size": 65536 00:11:36.086 }, 00:11:36.086 { 00:11:36.086 "name": "BaseBdev2", 00:11:36.086 "uuid": "6576aecd-f4fb-4cad-a813-01e93b7f77c7", 00:11:36.086 "is_configured": true, 00:11:36.086 "data_offset": 0, 00:11:36.086 "data_size": 65536 00:11:36.086 }, 00:11:36.086 { 00:11:36.086 "name": "BaseBdev3", 00:11:36.086 "uuid": "ed79a48d-438b-4cbd-8d42-1f645b36224d", 00:11:36.086 "is_configured": true, 00:11:36.086 "data_offset": 0, 00:11:36.086 "data_size": 65536 00:11:36.086 }, 00:11:36.086 { 
00:11:36.086 "name": "BaseBdev4", 00:11:36.086 "uuid": "5fdce48e-0c9f-4514-a92d-96e9e99bf5b0", 00:11:36.086 "is_configured": true, 00:11:36.086 "data_offset": 0, 00:11:36.086 "data_size": 65536 00:11:36.086 } 00:11:36.086 ] 00:11:36.086 }' 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.086 21:42:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.345 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:36.345 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.345 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.345 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.345 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.345 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.605 [2024-09-29 21:42:55.371393] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.605 
21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.605 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.605 [2024-09-29 21:42:55.520059] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:36.865 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.865 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.865 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.865 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.865 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.865 21:42:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.865 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.865 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.865 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.865 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.865 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:36.865 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.865 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.865 [2024-09-29 21:42:55.675584] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:36.865 [2024-09-29 21:42:55.675744] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.865 [2024-09-29 21:42:55.777230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.865 [2024-09-29 21:42:55.777376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.866 [2024-09-29 21:42:55.777421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.866 21:42:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.866 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.126 BaseBdev2 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:37.126 21:42:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.126 [ 00:11:37.126 { 00:11:37.126 "name": "BaseBdev2", 00:11:37.126 "aliases": [ 00:11:37.126 "c8384b6e-c456-490e-b3de-2de20ce11410" 00:11:37.126 ], 00:11:37.126 "product_name": "Malloc disk", 00:11:37.126 "block_size": 512, 00:11:37.126 "num_blocks": 65536, 00:11:37.126 "uuid": "c8384b6e-c456-490e-b3de-2de20ce11410", 00:11:37.126 "assigned_rate_limits": { 00:11:37.126 "rw_ios_per_sec": 0, 00:11:37.126 "rw_mbytes_per_sec": 0, 00:11:37.126 "r_mbytes_per_sec": 0, 00:11:37.126 "w_mbytes_per_sec": 0 00:11:37.126 }, 00:11:37.126 "claimed": false, 00:11:37.126 "zoned": false, 00:11:37.126 "supported_io_types": { 00:11:37.126 "read": true, 00:11:37.126 "write": true, 00:11:37.126 "unmap": true, 00:11:37.126 "flush": true, 00:11:37.126 "reset": true, 00:11:37.126 "nvme_admin": false, 00:11:37.126 "nvme_io": false, 00:11:37.126 "nvme_io_md": false, 00:11:37.126 "write_zeroes": true, 00:11:37.126 "zcopy": true, 00:11:37.126 "get_zone_info": false, 00:11:37.126 "zone_management": false, 00:11:37.126 "zone_append": false, 00:11:37.126 "compare": false, 00:11:37.126 "compare_and_write": false, 
00:11:37.126 "abort": true, 00:11:37.126 "seek_hole": false, 00:11:37.126 "seek_data": false, 00:11:37.126 "copy": true, 00:11:37.126 "nvme_iov_md": false 00:11:37.126 }, 00:11:37.126 "memory_domains": [ 00:11:37.126 { 00:11:37.126 "dma_device_id": "system", 00:11:37.126 "dma_device_type": 1 00:11:37.126 }, 00:11:37.126 { 00:11:37.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.126 "dma_device_type": 2 00:11:37.126 } 00:11:37.126 ], 00:11:37.126 "driver_specific": {} 00:11:37.126 } 00:11:37.126 ] 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.126 BaseBdev3 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:37.126 21:42:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.126 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.126 [ 00:11:37.126 { 00:11:37.126 "name": "BaseBdev3", 00:11:37.126 "aliases": [ 00:11:37.126 "d6f3f603-3f2c-4baa-9f94-b31d3a583c89" 00:11:37.126 ], 00:11:37.126 "product_name": "Malloc disk", 00:11:37.126 "block_size": 512, 00:11:37.126 "num_blocks": 65536, 00:11:37.126 "uuid": "d6f3f603-3f2c-4baa-9f94-b31d3a583c89", 00:11:37.126 "assigned_rate_limits": { 00:11:37.126 "rw_ios_per_sec": 0, 00:11:37.126 "rw_mbytes_per_sec": 0, 00:11:37.126 "r_mbytes_per_sec": 0, 00:11:37.126 "w_mbytes_per_sec": 0 00:11:37.126 }, 00:11:37.126 "claimed": false, 00:11:37.126 "zoned": false, 00:11:37.126 "supported_io_types": { 00:11:37.126 "read": true, 00:11:37.126 "write": true, 00:11:37.126 "unmap": true, 00:11:37.126 "flush": true, 00:11:37.126 "reset": true, 00:11:37.126 "nvme_admin": false, 00:11:37.126 "nvme_io": false, 00:11:37.126 "nvme_io_md": false, 00:11:37.126 "write_zeroes": true, 00:11:37.126 "zcopy": true, 00:11:37.126 "get_zone_info": false, 00:11:37.126 "zone_management": false, 00:11:37.126 "zone_append": false, 00:11:37.126 "compare": false, 00:11:37.126 "compare_and_write": false, 
00:11:37.126 "abort": true, 00:11:37.126 "seek_hole": false, 00:11:37.126 "seek_data": false, 00:11:37.126 "copy": true, 00:11:37.126 "nvme_iov_md": false 00:11:37.126 }, 00:11:37.126 "memory_domains": [ 00:11:37.126 { 00:11:37.126 "dma_device_id": "system", 00:11:37.126 "dma_device_type": 1 00:11:37.127 }, 00:11:37.127 { 00:11:37.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.127 "dma_device_type": 2 00:11:37.127 } 00:11:37.127 ], 00:11:37.127 "driver_specific": {} 00:11:37.127 } 00:11:37.127 ] 00:11:37.127 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.127 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:37.127 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.127 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.127 21:42:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:37.127 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.127 21:42:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.127 BaseBdev4 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:37.127 21:42:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.127 [ 00:11:37.127 { 00:11:37.127 "name": "BaseBdev4", 00:11:37.127 "aliases": [ 00:11:37.127 "9ee6e40c-93c0-4ef2-a898-1090978dfc59" 00:11:37.127 ], 00:11:37.127 "product_name": "Malloc disk", 00:11:37.127 "block_size": 512, 00:11:37.127 "num_blocks": 65536, 00:11:37.127 "uuid": "9ee6e40c-93c0-4ef2-a898-1090978dfc59", 00:11:37.127 "assigned_rate_limits": { 00:11:37.127 "rw_ios_per_sec": 0, 00:11:37.127 "rw_mbytes_per_sec": 0, 00:11:37.127 "r_mbytes_per_sec": 0, 00:11:37.127 "w_mbytes_per_sec": 0 00:11:37.127 }, 00:11:37.127 "claimed": false, 00:11:37.127 "zoned": false, 00:11:37.127 "supported_io_types": { 00:11:37.127 "read": true, 00:11:37.127 "write": true, 00:11:37.127 "unmap": true, 00:11:37.127 "flush": true, 00:11:37.127 "reset": true, 00:11:37.127 "nvme_admin": false, 00:11:37.127 "nvme_io": false, 00:11:37.127 "nvme_io_md": false, 00:11:37.127 "write_zeroes": true, 00:11:37.127 "zcopy": true, 00:11:37.127 "get_zone_info": false, 00:11:37.127 "zone_management": false, 00:11:37.127 "zone_append": false, 00:11:37.127 "compare": false, 00:11:37.127 "compare_and_write": false, 
00:11:37.127 "abort": true, 00:11:37.127 "seek_hole": false, 00:11:37.127 "seek_data": false, 00:11:37.127 "copy": true, 00:11:37.127 "nvme_iov_md": false 00:11:37.127 }, 00:11:37.127 "memory_domains": [ 00:11:37.127 { 00:11:37.127 "dma_device_id": "system", 00:11:37.127 "dma_device_type": 1 00:11:37.127 }, 00:11:37.127 { 00:11:37.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.127 "dma_device_type": 2 00:11:37.127 } 00:11:37.127 ], 00:11:37.127 "driver_specific": {} 00:11:37.127 } 00:11:37.127 ] 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.127 [2024-09-29 21:42:56.070471] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:37.127 [2024-09-29 21:42:56.070562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:37.127 [2024-09-29 21:42:56.070602] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.127 [2024-09-29 21:42:56.072700] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.127 [2024-09-29 21:42:56.072747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:37.127 21:42:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.127 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.387 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.387 "name": "Existed_Raid", 00:11:37.387 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:37.387 "strip_size_kb": 0, 00:11:37.387 "state": "configuring", 00:11:37.387 "raid_level": "raid1", 00:11:37.387 "superblock": false, 00:11:37.387 "num_base_bdevs": 4, 00:11:37.387 "num_base_bdevs_discovered": 3, 00:11:37.387 "num_base_bdevs_operational": 4, 00:11:37.387 "base_bdevs_list": [ 00:11:37.387 { 00:11:37.387 "name": "BaseBdev1", 00:11:37.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.387 "is_configured": false, 00:11:37.387 "data_offset": 0, 00:11:37.387 "data_size": 0 00:11:37.387 }, 00:11:37.387 { 00:11:37.387 "name": "BaseBdev2", 00:11:37.387 "uuid": "c8384b6e-c456-490e-b3de-2de20ce11410", 00:11:37.387 "is_configured": true, 00:11:37.387 "data_offset": 0, 00:11:37.387 "data_size": 65536 00:11:37.387 }, 00:11:37.387 { 00:11:37.387 "name": "BaseBdev3", 00:11:37.387 "uuid": "d6f3f603-3f2c-4baa-9f94-b31d3a583c89", 00:11:37.387 "is_configured": true, 00:11:37.387 "data_offset": 0, 00:11:37.387 "data_size": 65536 00:11:37.387 }, 00:11:37.387 { 00:11:37.387 "name": "BaseBdev4", 00:11:37.387 "uuid": "9ee6e40c-93c0-4ef2-a898-1090978dfc59", 00:11:37.387 "is_configured": true, 00:11:37.387 "data_offset": 0, 00:11:37.387 "data_size": 65536 00:11:37.387 } 00:11:37.387 ] 00:11:37.387 }' 00:11:37.387 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.387 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.646 [2024-09-29 21:42:56.489753] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.646 "name": "Existed_Raid", 00:11:37.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.646 
"strip_size_kb": 0, 00:11:37.646 "state": "configuring", 00:11:37.646 "raid_level": "raid1", 00:11:37.646 "superblock": false, 00:11:37.646 "num_base_bdevs": 4, 00:11:37.646 "num_base_bdevs_discovered": 2, 00:11:37.646 "num_base_bdevs_operational": 4, 00:11:37.646 "base_bdevs_list": [ 00:11:37.646 { 00:11:37.646 "name": "BaseBdev1", 00:11:37.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.646 "is_configured": false, 00:11:37.646 "data_offset": 0, 00:11:37.646 "data_size": 0 00:11:37.646 }, 00:11:37.646 { 00:11:37.646 "name": null, 00:11:37.646 "uuid": "c8384b6e-c456-490e-b3de-2de20ce11410", 00:11:37.646 "is_configured": false, 00:11:37.646 "data_offset": 0, 00:11:37.646 "data_size": 65536 00:11:37.646 }, 00:11:37.646 { 00:11:37.646 "name": "BaseBdev3", 00:11:37.646 "uuid": "d6f3f603-3f2c-4baa-9f94-b31d3a583c89", 00:11:37.646 "is_configured": true, 00:11:37.646 "data_offset": 0, 00:11:37.646 "data_size": 65536 00:11:37.646 }, 00:11:37.646 { 00:11:37.646 "name": "BaseBdev4", 00:11:37.646 "uuid": "9ee6e40c-93c0-4ef2-a898-1090978dfc59", 00:11:37.646 "is_configured": true, 00:11:37.646 "data_offset": 0, 00:11:37.646 "data_size": 65536 00:11:37.646 } 00:11:37.646 ] 00:11:37.646 }' 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.646 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.216 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:38.216 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.216 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.216 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.216 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.216 21:42:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:38.216 21:42:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:38.216 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.216 21:42:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.216 [2024-09-29 21:42:57.006007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.216 BaseBdev1 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.216 [ 00:11:38.216 { 00:11:38.216 "name": "BaseBdev1", 00:11:38.216 "aliases": [ 00:11:38.216 "b8d29ab6-d196-4991-a88f-afedc2e9e4d6" 00:11:38.216 ], 00:11:38.216 "product_name": "Malloc disk", 00:11:38.216 "block_size": 512, 00:11:38.216 "num_blocks": 65536, 00:11:38.216 "uuid": "b8d29ab6-d196-4991-a88f-afedc2e9e4d6", 00:11:38.216 "assigned_rate_limits": { 00:11:38.216 "rw_ios_per_sec": 0, 00:11:38.216 "rw_mbytes_per_sec": 0, 00:11:38.216 "r_mbytes_per_sec": 0, 00:11:38.216 "w_mbytes_per_sec": 0 00:11:38.216 }, 00:11:38.216 "claimed": true, 00:11:38.216 "claim_type": "exclusive_write", 00:11:38.216 "zoned": false, 00:11:38.216 "supported_io_types": { 00:11:38.216 "read": true, 00:11:38.216 "write": true, 00:11:38.216 "unmap": true, 00:11:38.216 "flush": true, 00:11:38.216 "reset": true, 00:11:38.216 "nvme_admin": false, 00:11:38.216 "nvme_io": false, 00:11:38.216 "nvme_io_md": false, 00:11:38.216 "write_zeroes": true, 00:11:38.216 "zcopy": true, 00:11:38.216 "get_zone_info": false, 00:11:38.216 "zone_management": false, 00:11:38.216 "zone_append": false, 00:11:38.216 "compare": false, 00:11:38.216 "compare_and_write": false, 00:11:38.216 "abort": true, 00:11:38.216 "seek_hole": false, 00:11:38.216 "seek_data": false, 00:11:38.216 "copy": true, 00:11:38.216 "nvme_iov_md": false 00:11:38.216 }, 00:11:38.216 "memory_domains": [ 00:11:38.216 { 00:11:38.216 "dma_device_id": "system", 00:11:38.216 "dma_device_type": 1 00:11:38.216 }, 00:11:38.216 { 00:11:38.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.216 "dma_device_type": 2 00:11:38.216 } 00:11:38.216 ], 00:11:38.216 "driver_specific": {} 00:11:38.216 } 00:11:38.216 ] 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.216 "name": "Existed_Raid", 00:11:38.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.216 
"strip_size_kb": 0, 00:11:38.216 "state": "configuring", 00:11:38.216 "raid_level": "raid1", 00:11:38.216 "superblock": false, 00:11:38.216 "num_base_bdevs": 4, 00:11:38.216 "num_base_bdevs_discovered": 3, 00:11:38.216 "num_base_bdevs_operational": 4, 00:11:38.216 "base_bdevs_list": [ 00:11:38.216 { 00:11:38.216 "name": "BaseBdev1", 00:11:38.216 "uuid": "b8d29ab6-d196-4991-a88f-afedc2e9e4d6", 00:11:38.216 "is_configured": true, 00:11:38.216 "data_offset": 0, 00:11:38.216 "data_size": 65536 00:11:38.216 }, 00:11:38.216 { 00:11:38.216 "name": null, 00:11:38.216 "uuid": "c8384b6e-c456-490e-b3de-2de20ce11410", 00:11:38.216 "is_configured": false, 00:11:38.216 "data_offset": 0, 00:11:38.216 "data_size": 65536 00:11:38.216 }, 00:11:38.216 { 00:11:38.216 "name": "BaseBdev3", 00:11:38.216 "uuid": "d6f3f603-3f2c-4baa-9f94-b31d3a583c89", 00:11:38.216 "is_configured": true, 00:11:38.216 "data_offset": 0, 00:11:38.216 "data_size": 65536 00:11:38.216 }, 00:11:38.216 { 00:11:38.216 "name": "BaseBdev4", 00:11:38.216 "uuid": "9ee6e40c-93c0-4ef2-a898-1090978dfc59", 00:11:38.216 "is_configured": true, 00:11:38.216 "data_offset": 0, 00:11:38.216 "data_size": 65536 00:11:38.216 } 00:11:38.216 ] 00:11:38.216 }' 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.216 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.476 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:38.476 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.476 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.476 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.736 
21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.736 [2024-09-29 21:42:57.497217] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.736 "name": "Existed_Raid", 00:11:38.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.736 "strip_size_kb": 0, 00:11:38.736 "state": "configuring", 00:11:38.736 "raid_level": "raid1", 00:11:38.736 "superblock": false, 00:11:38.736 "num_base_bdevs": 4, 00:11:38.736 "num_base_bdevs_discovered": 2, 00:11:38.736 "num_base_bdevs_operational": 4, 00:11:38.736 "base_bdevs_list": [ 00:11:38.736 { 00:11:38.736 "name": "BaseBdev1", 00:11:38.736 "uuid": "b8d29ab6-d196-4991-a88f-afedc2e9e4d6", 00:11:38.736 "is_configured": true, 00:11:38.736 "data_offset": 0, 00:11:38.736 "data_size": 65536 00:11:38.736 }, 00:11:38.736 { 00:11:38.736 "name": null, 00:11:38.736 "uuid": "c8384b6e-c456-490e-b3de-2de20ce11410", 00:11:38.736 "is_configured": false, 00:11:38.736 "data_offset": 0, 00:11:38.736 "data_size": 65536 00:11:38.736 }, 00:11:38.736 { 00:11:38.736 "name": null, 00:11:38.736 "uuid": "d6f3f603-3f2c-4baa-9f94-b31d3a583c89", 00:11:38.736 "is_configured": false, 00:11:38.736 "data_offset": 0, 00:11:38.736 "data_size": 65536 00:11:38.736 }, 00:11:38.736 { 00:11:38.736 "name": "BaseBdev4", 00:11:38.736 "uuid": "9ee6e40c-93c0-4ef2-a898-1090978dfc59", 00:11:38.736 "is_configured": true, 00:11:38.736 "data_offset": 0, 00:11:38.736 "data_size": 65536 00:11:38.736 } 00:11:38.736 ] 00:11:38.736 }' 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.736 21:42:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.996 [2024-09-29 21:42:57.964434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.996 21:42:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.255 21:42:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.255 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.255 "name": "Existed_Raid", 00:11:39.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.255 "strip_size_kb": 0, 00:11:39.255 "state": "configuring", 00:11:39.255 "raid_level": "raid1", 00:11:39.256 "superblock": false, 00:11:39.256 "num_base_bdevs": 4, 00:11:39.256 "num_base_bdevs_discovered": 3, 00:11:39.256 "num_base_bdevs_operational": 4, 00:11:39.256 "base_bdevs_list": [ 00:11:39.256 { 00:11:39.256 "name": "BaseBdev1", 00:11:39.256 "uuid": "b8d29ab6-d196-4991-a88f-afedc2e9e4d6", 00:11:39.256 "is_configured": true, 00:11:39.256 "data_offset": 0, 00:11:39.256 "data_size": 65536 00:11:39.256 }, 00:11:39.256 { 00:11:39.256 "name": null, 00:11:39.256 "uuid": "c8384b6e-c456-490e-b3de-2de20ce11410", 00:11:39.256 "is_configured": false, 00:11:39.256 "data_offset": 0, 00:11:39.256 "data_size": 65536 00:11:39.256 }, 00:11:39.256 { 
00:11:39.256 "name": "BaseBdev3", 00:11:39.256 "uuid": "d6f3f603-3f2c-4baa-9f94-b31d3a583c89", 00:11:39.256 "is_configured": true, 00:11:39.256 "data_offset": 0, 00:11:39.256 "data_size": 65536 00:11:39.256 }, 00:11:39.256 { 00:11:39.256 "name": "BaseBdev4", 00:11:39.256 "uuid": "9ee6e40c-93c0-4ef2-a898-1090978dfc59", 00:11:39.256 "is_configured": true, 00:11:39.256 "data_offset": 0, 00:11:39.256 "data_size": 65536 00:11:39.256 } 00:11:39.256 ] 00:11:39.256 }' 00:11:39.256 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.256 21:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.515 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.515 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.515 21:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.515 21:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.515 21:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.515 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:39.515 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:39.515 21:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.515 21:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.515 [2024-09-29 21:42:58.459601] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.775 "name": "Existed_Raid", 00:11:39.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.775 "strip_size_kb": 0, 00:11:39.775 "state": "configuring", 00:11:39.775 "raid_level": "raid1", 00:11:39.775 "superblock": false, 00:11:39.775 
"num_base_bdevs": 4, 00:11:39.775 "num_base_bdevs_discovered": 2, 00:11:39.775 "num_base_bdevs_operational": 4, 00:11:39.775 "base_bdevs_list": [ 00:11:39.775 { 00:11:39.775 "name": null, 00:11:39.775 "uuid": "b8d29ab6-d196-4991-a88f-afedc2e9e4d6", 00:11:39.775 "is_configured": false, 00:11:39.775 "data_offset": 0, 00:11:39.775 "data_size": 65536 00:11:39.775 }, 00:11:39.775 { 00:11:39.775 "name": null, 00:11:39.775 "uuid": "c8384b6e-c456-490e-b3de-2de20ce11410", 00:11:39.775 "is_configured": false, 00:11:39.775 "data_offset": 0, 00:11:39.775 "data_size": 65536 00:11:39.775 }, 00:11:39.775 { 00:11:39.775 "name": "BaseBdev3", 00:11:39.775 "uuid": "d6f3f603-3f2c-4baa-9f94-b31d3a583c89", 00:11:39.775 "is_configured": true, 00:11:39.775 "data_offset": 0, 00:11:39.775 "data_size": 65536 00:11:39.775 }, 00:11:39.775 { 00:11:39.775 "name": "BaseBdev4", 00:11:39.775 "uuid": "9ee6e40c-93c0-4ef2-a898-1090978dfc59", 00:11:39.775 "is_configured": true, 00:11:39.775 "data_offset": 0, 00:11:39.775 "data_size": 65536 00:11:39.775 } 00:11:39.775 ] 00:11:39.775 }' 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.775 21:42:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.035 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.035 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.035 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:40.035 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:40.294 21:42:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.294 [2024-09-29 21:42:59.045676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.294 21:42:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.294 "name": "Existed_Raid", 00:11:40.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.294 "strip_size_kb": 0, 00:11:40.294 "state": "configuring", 00:11:40.294 "raid_level": "raid1", 00:11:40.294 "superblock": false, 00:11:40.294 "num_base_bdevs": 4, 00:11:40.294 "num_base_bdevs_discovered": 3, 00:11:40.294 "num_base_bdevs_operational": 4, 00:11:40.294 "base_bdevs_list": [ 00:11:40.294 { 00:11:40.294 "name": null, 00:11:40.294 "uuid": "b8d29ab6-d196-4991-a88f-afedc2e9e4d6", 00:11:40.294 "is_configured": false, 00:11:40.294 "data_offset": 0, 00:11:40.294 "data_size": 65536 00:11:40.294 }, 00:11:40.294 { 00:11:40.294 "name": "BaseBdev2", 00:11:40.294 "uuid": "c8384b6e-c456-490e-b3de-2de20ce11410", 00:11:40.294 "is_configured": true, 00:11:40.294 "data_offset": 0, 00:11:40.294 "data_size": 65536 00:11:40.294 }, 00:11:40.294 { 00:11:40.294 "name": "BaseBdev3", 00:11:40.294 "uuid": "d6f3f603-3f2c-4baa-9f94-b31d3a583c89", 00:11:40.294 "is_configured": true, 00:11:40.294 "data_offset": 0, 00:11:40.294 "data_size": 65536 00:11:40.294 }, 00:11:40.294 { 00:11:40.294 "name": "BaseBdev4", 00:11:40.294 "uuid": "9ee6e40c-93c0-4ef2-a898-1090978dfc59", 00:11:40.294 "is_configured": true, 00:11:40.294 "data_offset": 0, 00:11:40.294 "data_size": 65536 00:11:40.294 } 00:11:40.294 ] 00:11:40.294 }' 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.294 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b8d29ab6-d196-4991-a88f-afedc2e9e4d6 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.554 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.814 [2024-09-29 21:42:59.570406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:40.814 [2024-09-29 21:42:59.570519] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:40.814 [2024-09-29 21:42:59.570546] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:40.814 [2024-09-29 21:42:59.570893] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:40.814 [2024-09-29 21:42:59.571129] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:40.814 [2024-09-29 21:42:59.571174] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:40.814 [2024-09-29 21:42:59.571468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.814 NewBaseBdev 00:11:40.814 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.814 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:40.814 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:40.814 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:40.814 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:40.814 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:40.814 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:40.814 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:40.814 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.814 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.814 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.814 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:40.814 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.814 21:42:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.814 [ 00:11:40.814 { 00:11:40.814 "name": "NewBaseBdev", 00:11:40.814 "aliases": [ 00:11:40.814 "b8d29ab6-d196-4991-a88f-afedc2e9e4d6" 00:11:40.814 ], 00:11:40.814 "product_name": "Malloc disk", 00:11:40.814 "block_size": 512, 00:11:40.814 "num_blocks": 65536, 00:11:40.814 "uuid": "b8d29ab6-d196-4991-a88f-afedc2e9e4d6", 00:11:40.814 "assigned_rate_limits": { 00:11:40.814 "rw_ios_per_sec": 0, 00:11:40.814 "rw_mbytes_per_sec": 0, 00:11:40.814 "r_mbytes_per_sec": 0, 00:11:40.814 "w_mbytes_per_sec": 0 00:11:40.814 }, 00:11:40.814 "claimed": true, 00:11:40.814 "claim_type": "exclusive_write", 00:11:40.814 "zoned": false, 00:11:40.814 "supported_io_types": { 00:11:40.814 "read": true, 00:11:40.814 "write": true, 00:11:40.814 "unmap": true, 00:11:40.814 "flush": true, 00:11:40.814 "reset": true, 00:11:40.814 "nvme_admin": false, 00:11:40.814 "nvme_io": false, 00:11:40.814 "nvme_io_md": false, 00:11:40.815 "write_zeroes": true, 00:11:40.815 "zcopy": true, 00:11:40.815 "get_zone_info": false, 00:11:40.815 "zone_management": false, 00:11:40.815 "zone_append": false, 00:11:40.815 "compare": false, 00:11:40.815 "compare_and_write": false, 00:11:40.815 "abort": true, 00:11:40.815 "seek_hole": false, 00:11:40.815 "seek_data": false, 00:11:40.815 "copy": true, 00:11:40.815 "nvme_iov_md": false 00:11:40.815 }, 00:11:40.815 "memory_domains": [ 00:11:40.815 { 00:11:40.815 "dma_device_id": "system", 00:11:40.815 "dma_device_type": 1 00:11:40.815 }, 00:11:40.815 { 00:11:40.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.815 "dma_device_type": 2 00:11:40.815 } 00:11:40.815 ], 00:11:40.815 "driver_specific": {} 00:11:40.815 } 00:11:40.815 ] 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:40.815 21:42:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.815 "name": "Existed_Raid", 00:11:40.815 "uuid": "91cd1d5b-1e85-492c-bcaa-6aa3f8cd4c6f", 00:11:40.815 "strip_size_kb": 0, 00:11:40.815 "state": "online", 00:11:40.815 "raid_level": "raid1", 
00:11:40.815 "superblock": false, 00:11:40.815 "num_base_bdevs": 4, 00:11:40.815 "num_base_bdevs_discovered": 4, 00:11:40.815 "num_base_bdevs_operational": 4, 00:11:40.815 "base_bdevs_list": [ 00:11:40.815 { 00:11:40.815 "name": "NewBaseBdev", 00:11:40.815 "uuid": "b8d29ab6-d196-4991-a88f-afedc2e9e4d6", 00:11:40.815 "is_configured": true, 00:11:40.815 "data_offset": 0, 00:11:40.815 "data_size": 65536 00:11:40.815 }, 00:11:40.815 { 00:11:40.815 "name": "BaseBdev2", 00:11:40.815 "uuid": "c8384b6e-c456-490e-b3de-2de20ce11410", 00:11:40.815 "is_configured": true, 00:11:40.815 "data_offset": 0, 00:11:40.815 "data_size": 65536 00:11:40.815 }, 00:11:40.815 { 00:11:40.815 "name": "BaseBdev3", 00:11:40.815 "uuid": "d6f3f603-3f2c-4baa-9f94-b31d3a583c89", 00:11:40.815 "is_configured": true, 00:11:40.815 "data_offset": 0, 00:11:40.815 "data_size": 65536 00:11:40.815 }, 00:11:40.815 { 00:11:40.815 "name": "BaseBdev4", 00:11:40.815 "uuid": "9ee6e40c-93c0-4ef2-a898-1090978dfc59", 00:11:40.815 "is_configured": true, 00:11:40.815 "data_offset": 0, 00:11:40.815 "data_size": 65536 00:11:40.815 } 00:11:40.815 ] 00:11:40.815 }' 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.815 21:42:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.075 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:41.075 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:41.075 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.075 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.075 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.075 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:11:41.075 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:41.075 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.075 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.075 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.075 [2024-09-29 21:43:00.045989] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.334 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.335 "name": "Existed_Raid", 00:11:41.335 "aliases": [ 00:11:41.335 "91cd1d5b-1e85-492c-bcaa-6aa3f8cd4c6f" 00:11:41.335 ], 00:11:41.335 "product_name": "Raid Volume", 00:11:41.335 "block_size": 512, 00:11:41.335 "num_blocks": 65536, 00:11:41.335 "uuid": "91cd1d5b-1e85-492c-bcaa-6aa3f8cd4c6f", 00:11:41.335 "assigned_rate_limits": { 00:11:41.335 "rw_ios_per_sec": 0, 00:11:41.335 "rw_mbytes_per_sec": 0, 00:11:41.335 "r_mbytes_per_sec": 0, 00:11:41.335 "w_mbytes_per_sec": 0 00:11:41.335 }, 00:11:41.335 "claimed": false, 00:11:41.335 "zoned": false, 00:11:41.335 "supported_io_types": { 00:11:41.335 "read": true, 00:11:41.335 "write": true, 00:11:41.335 "unmap": false, 00:11:41.335 "flush": false, 00:11:41.335 "reset": true, 00:11:41.335 "nvme_admin": false, 00:11:41.335 "nvme_io": false, 00:11:41.335 "nvme_io_md": false, 00:11:41.335 "write_zeroes": true, 00:11:41.335 "zcopy": false, 00:11:41.335 "get_zone_info": false, 00:11:41.335 "zone_management": false, 00:11:41.335 "zone_append": false, 00:11:41.335 "compare": false, 00:11:41.335 "compare_and_write": false, 00:11:41.335 "abort": false, 00:11:41.335 "seek_hole": false, 00:11:41.335 "seek_data": false, 00:11:41.335 "copy": false, 00:11:41.335 
"nvme_iov_md": false 00:11:41.335 }, 00:11:41.335 "memory_domains": [ 00:11:41.335 { 00:11:41.335 "dma_device_id": "system", 00:11:41.335 "dma_device_type": 1 00:11:41.335 }, 00:11:41.335 { 00:11:41.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.335 "dma_device_type": 2 00:11:41.335 }, 00:11:41.335 { 00:11:41.335 "dma_device_id": "system", 00:11:41.335 "dma_device_type": 1 00:11:41.335 }, 00:11:41.335 { 00:11:41.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.335 "dma_device_type": 2 00:11:41.335 }, 00:11:41.335 { 00:11:41.335 "dma_device_id": "system", 00:11:41.335 "dma_device_type": 1 00:11:41.335 }, 00:11:41.335 { 00:11:41.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.335 "dma_device_type": 2 00:11:41.335 }, 00:11:41.335 { 00:11:41.335 "dma_device_id": "system", 00:11:41.335 "dma_device_type": 1 00:11:41.335 }, 00:11:41.335 { 00:11:41.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.335 "dma_device_type": 2 00:11:41.335 } 00:11:41.335 ], 00:11:41.335 "driver_specific": { 00:11:41.335 "raid": { 00:11:41.335 "uuid": "91cd1d5b-1e85-492c-bcaa-6aa3f8cd4c6f", 00:11:41.335 "strip_size_kb": 0, 00:11:41.335 "state": "online", 00:11:41.335 "raid_level": "raid1", 00:11:41.335 "superblock": false, 00:11:41.335 "num_base_bdevs": 4, 00:11:41.335 "num_base_bdevs_discovered": 4, 00:11:41.335 "num_base_bdevs_operational": 4, 00:11:41.335 "base_bdevs_list": [ 00:11:41.335 { 00:11:41.335 "name": "NewBaseBdev", 00:11:41.335 "uuid": "b8d29ab6-d196-4991-a88f-afedc2e9e4d6", 00:11:41.335 "is_configured": true, 00:11:41.335 "data_offset": 0, 00:11:41.335 "data_size": 65536 00:11:41.335 }, 00:11:41.335 { 00:11:41.335 "name": "BaseBdev2", 00:11:41.335 "uuid": "c8384b6e-c456-490e-b3de-2de20ce11410", 00:11:41.335 "is_configured": true, 00:11:41.335 "data_offset": 0, 00:11:41.335 "data_size": 65536 00:11:41.335 }, 00:11:41.335 { 00:11:41.335 "name": "BaseBdev3", 00:11:41.335 "uuid": "d6f3f603-3f2c-4baa-9f94-b31d3a583c89", 00:11:41.335 "is_configured": true, 
00:11:41.335 "data_offset": 0, 00:11:41.335 "data_size": 65536 00:11:41.335 }, 00:11:41.335 { 00:11:41.335 "name": "BaseBdev4", 00:11:41.335 "uuid": "9ee6e40c-93c0-4ef2-a898-1090978dfc59", 00:11:41.335 "is_configured": true, 00:11:41.335 "data_offset": 0, 00:11:41.335 "data_size": 65536 00:11:41.335 } 00:11:41.335 ] 00:11:41.335 } 00:11:41.335 } 00:11:41.335 }' 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:41.335 BaseBdev2 00:11:41.335 BaseBdev3 00:11:41.335 BaseBdev4' 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.335 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.595 [2024-09-29 21:43:00.385088] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.595 [2024-09-29 21:43:00.385115] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.595 [2024-09-29 21:43:00.385195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.595 [2024-09-29 21:43:00.385510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.595 [2024-09-29 21:43:00.385524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73266 
00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73266 ']' 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73266 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73266 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:41.595 killing process with pid 73266 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73266' 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73266 00:11:41.595 [2024-09-29 21:43:00.419922] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.595 21:43:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73266 00:11:41.855 [2024-09-29 21:43:00.829619] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.236 21:43:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:43.236 00:11:43.236 real 0m11.623s 00:11:43.236 user 0m18.037s 00:11:43.236 sys 0m2.242s 00:11:43.236 ************************************ 00:11:43.236 END TEST raid_state_function_test 00:11:43.236 ************************************ 00:11:43.236 21:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.236 21:43:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.544 21:43:02 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:43.544 21:43:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:43.544 21:43:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.544 21:43:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.544 ************************************ 00:11:43.544 START TEST raid_state_function_test_sb 00:11:43.544 ************************************ 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.544 21:43:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73937 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:43.544 Process raid pid: 73937 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73937' 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73937 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73937 ']' 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.544 21:43:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.544 [2024-09-29 21:43:02.339885] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:43.544 [2024-09-29 21:43:02.340064] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.544 [2024-09-29 21:43:02.491206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.804 [2024-09-29 21:43:02.734457] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.063 [2024-09-29 21:43:02.963377] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.063 [2024-09-29 21:43:02.963515] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.323 [2024-09-29 21:43:03.156333] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:44.323 [2024-09-29 21:43:03.156458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:44.323 [2024-09-29 21:43:03.156474] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:44.323 [2024-09-29 21:43:03.156485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:44.323 [2024-09-29 21:43:03.156491] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:44.323 [2024-09-29 21:43:03.156502] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:44.323 [2024-09-29 21:43:03.156508] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:44.323 [2024-09-29 21:43:03.156518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.323 21:43:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.323 "name": "Existed_Raid", 00:11:44.323 "uuid": "d8c284ba-a9cb-43f4-ae56-ff158dd49263", 00:11:44.323 "strip_size_kb": 0, 00:11:44.323 "state": "configuring", 00:11:44.323 "raid_level": "raid1", 00:11:44.323 "superblock": true, 00:11:44.323 "num_base_bdevs": 4, 00:11:44.323 "num_base_bdevs_discovered": 0, 00:11:44.323 "num_base_bdevs_operational": 4, 00:11:44.323 "base_bdevs_list": [ 00:11:44.323 { 00:11:44.323 "name": "BaseBdev1", 00:11:44.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.323 "is_configured": false, 00:11:44.323 "data_offset": 0, 00:11:44.323 "data_size": 0 00:11:44.323 }, 00:11:44.323 { 00:11:44.323 "name": "BaseBdev2", 00:11:44.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.323 "is_configured": false, 00:11:44.323 "data_offset": 0, 00:11:44.323 "data_size": 0 00:11:44.323 }, 00:11:44.323 { 00:11:44.323 "name": "BaseBdev3", 00:11:44.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.323 "is_configured": false, 00:11:44.323 "data_offset": 0, 00:11:44.323 "data_size": 0 00:11:44.323 }, 00:11:44.323 { 00:11:44.323 "name": "BaseBdev4", 00:11:44.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.323 "is_configured": false, 00:11:44.323 "data_offset": 0, 00:11:44.323 "data_size": 0 00:11:44.323 } 00:11:44.323 ] 00:11:44.323 }' 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.323 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.893 21:43:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:44.893 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.893 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.893 [2024-09-29 21:43:03.623453] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:44.893 [2024-09-29 21:43:03.623553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:44.893 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.893 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:44.893 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.893 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.893 [2024-09-29 21:43:03.635463] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:44.893 [2024-09-29 21:43:03.635536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:44.893 [2024-09-29 21:43:03.635580] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:44.894 [2024-09-29 21:43:03.635603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:44.894 [2024-09-29 21:43:03.635620] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:44.894 [2024-09-29 21:43:03.635641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:44.894 [2024-09-29 21:43:03.635658] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:44.894 [2024-09-29 21:43:03.635678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.894 [2024-09-29 21:43:03.723177] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.894 BaseBdev1 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.894 [ 00:11:44.894 { 00:11:44.894 "name": "BaseBdev1", 00:11:44.894 "aliases": [ 00:11:44.894 "1eb00ca1-c6f9-417f-9de4-dce7a7576010" 00:11:44.894 ], 00:11:44.894 "product_name": "Malloc disk", 00:11:44.894 "block_size": 512, 00:11:44.894 "num_blocks": 65536, 00:11:44.894 "uuid": "1eb00ca1-c6f9-417f-9de4-dce7a7576010", 00:11:44.894 "assigned_rate_limits": { 00:11:44.894 "rw_ios_per_sec": 0, 00:11:44.894 "rw_mbytes_per_sec": 0, 00:11:44.894 "r_mbytes_per_sec": 0, 00:11:44.894 "w_mbytes_per_sec": 0 00:11:44.894 }, 00:11:44.894 "claimed": true, 00:11:44.894 "claim_type": "exclusive_write", 00:11:44.894 "zoned": false, 00:11:44.894 "supported_io_types": { 00:11:44.894 "read": true, 00:11:44.894 "write": true, 00:11:44.894 "unmap": true, 00:11:44.894 "flush": true, 00:11:44.894 "reset": true, 00:11:44.894 "nvme_admin": false, 00:11:44.894 "nvme_io": false, 00:11:44.894 "nvme_io_md": false, 00:11:44.894 "write_zeroes": true, 00:11:44.894 "zcopy": true, 00:11:44.894 "get_zone_info": false, 00:11:44.894 "zone_management": false, 00:11:44.894 "zone_append": false, 00:11:44.894 "compare": false, 00:11:44.894 "compare_and_write": false, 00:11:44.894 "abort": true, 00:11:44.894 "seek_hole": false, 00:11:44.894 "seek_data": false, 00:11:44.894 "copy": true, 00:11:44.894 "nvme_iov_md": false 00:11:44.894 }, 00:11:44.894 "memory_domains": [ 00:11:44.894 { 00:11:44.894 "dma_device_id": "system", 00:11:44.894 "dma_device_type": 1 00:11:44.894 }, 00:11:44.894 { 00:11:44.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.894 "dma_device_type": 2 00:11:44.894 } 00:11:44.894 ], 00:11:44.894 "driver_specific": {} 
00:11:44.894 } 00:11:44.894 ] 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.894 "name": "Existed_Raid", 00:11:44.894 "uuid": "80d37662-3f02-4fbf-bf19-8f972603c984", 00:11:44.894 "strip_size_kb": 0, 00:11:44.894 "state": "configuring", 00:11:44.894 "raid_level": "raid1", 00:11:44.894 "superblock": true, 00:11:44.894 "num_base_bdevs": 4, 00:11:44.894 "num_base_bdevs_discovered": 1, 00:11:44.894 "num_base_bdevs_operational": 4, 00:11:44.894 "base_bdevs_list": [ 00:11:44.894 { 00:11:44.894 "name": "BaseBdev1", 00:11:44.894 "uuid": "1eb00ca1-c6f9-417f-9de4-dce7a7576010", 00:11:44.894 "is_configured": true, 00:11:44.894 "data_offset": 2048, 00:11:44.894 "data_size": 63488 00:11:44.894 }, 00:11:44.894 { 00:11:44.894 "name": "BaseBdev2", 00:11:44.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.894 "is_configured": false, 00:11:44.894 "data_offset": 0, 00:11:44.894 "data_size": 0 00:11:44.894 }, 00:11:44.894 { 00:11:44.894 "name": "BaseBdev3", 00:11:44.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.894 "is_configured": false, 00:11:44.894 "data_offset": 0, 00:11:44.894 "data_size": 0 00:11:44.894 }, 00:11:44.894 { 00:11:44.894 "name": "BaseBdev4", 00:11:44.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.894 "is_configured": false, 00:11:44.894 "data_offset": 0, 00:11:44.894 "data_size": 0 00:11:44.894 } 00:11:44.894 ] 00:11:44.894 }' 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.894 21:43:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:45.463 [2024-09-29 21:43:04.198415] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.463 [2024-09-29 21:43:04.198465] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.463 [2024-09-29 21:43:04.210453] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.463 [2024-09-29 21:43:04.212589] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:45.463 [2024-09-29 21:43:04.212634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:45.463 [2024-09-29 21:43:04.212644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:45.463 [2024-09-29 21:43:04.212655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:45.463 [2024-09-29 21:43:04.212661] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:45.463 [2024-09-29 21:43:04.212670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:45.463 21:43:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.463 "name": 
"Existed_Raid", 00:11:45.463 "uuid": "09ae09fb-6043-4e91-95cf-a8bbd204e909", 00:11:45.463 "strip_size_kb": 0, 00:11:45.463 "state": "configuring", 00:11:45.463 "raid_level": "raid1", 00:11:45.463 "superblock": true, 00:11:45.463 "num_base_bdevs": 4, 00:11:45.463 "num_base_bdevs_discovered": 1, 00:11:45.463 "num_base_bdevs_operational": 4, 00:11:45.463 "base_bdevs_list": [ 00:11:45.463 { 00:11:45.463 "name": "BaseBdev1", 00:11:45.463 "uuid": "1eb00ca1-c6f9-417f-9de4-dce7a7576010", 00:11:45.463 "is_configured": true, 00:11:45.463 "data_offset": 2048, 00:11:45.463 "data_size": 63488 00:11:45.463 }, 00:11:45.463 { 00:11:45.463 "name": "BaseBdev2", 00:11:45.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.463 "is_configured": false, 00:11:45.463 "data_offset": 0, 00:11:45.463 "data_size": 0 00:11:45.463 }, 00:11:45.463 { 00:11:45.463 "name": "BaseBdev3", 00:11:45.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.463 "is_configured": false, 00:11:45.463 "data_offset": 0, 00:11:45.463 "data_size": 0 00:11:45.463 }, 00:11:45.463 { 00:11:45.463 "name": "BaseBdev4", 00:11:45.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.463 "is_configured": false, 00:11:45.463 "data_offset": 0, 00:11:45.463 "data_size": 0 00:11:45.463 } 00:11:45.463 ] 00:11:45.463 }' 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.463 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.723 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:45.723 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.723 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.723 [2024-09-29 21:43:04.697026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.723 
BaseBdev2 00:11:45.723 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.723 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:45.723 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:45.723 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:45.723 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:45.723 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:45.723 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:45.723 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:45.723 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.723 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.983 [ 00:11:45.983 { 00:11:45.983 "name": "BaseBdev2", 00:11:45.983 "aliases": [ 00:11:45.983 "1d09f6f9-7395-4d2a-af38-c8429aceac99" 00:11:45.983 ], 00:11:45.983 "product_name": "Malloc disk", 00:11:45.983 "block_size": 512, 00:11:45.983 "num_blocks": 65536, 00:11:45.983 "uuid": "1d09f6f9-7395-4d2a-af38-c8429aceac99", 00:11:45.983 "assigned_rate_limits": { 
00:11:45.983 "rw_ios_per_sec": 0, 00:11:45.983 "rw_mbytes_per_sec": 0, 00:11:45.983 "r_mbytes_per_sec": 0, 00:11:45.983 "w_mbytes_per_sec": 0 00:11:45.983 }, 00:11:45.983 "claimed": true, 00:11:45.983 "claim_type": "exclusive_write", 00:11:45.983 "zoned": false, 00:11:45.983 "supported_io_types": { 00:11:45.983 "read": true, 00:11:45.983 "write": true, 00:11:45.983 "unmap": true, 00:11:45.983 "flush": true, 00:11:45.983 "reset": true, 00:11:45.983 "nvme_admin": false, 00:11:45.983 "nvme_io": false, 00:11:45.983 "nvme_io_md": false, 00:11:45.983 "write_zeroes": true, 00:11:45.983 "zcopy": true, 00:11:45.983 "get_zone_info": false, 00:11:45.983 "zone_management": false, 00:11:45.983 "zone_append": false, 00:11:45.983 "compare": false, 00:11:45.983 "compare_and_write": false, 00:11:45.983 "abort": true, 00:11:45.983 "seek_hole": false, 00:11:45.983 "seek_data": false, 00:11:45.983 "copy": true, 00:11:45.983 "nvme_iov_md": false 00:11:45.983 }, 00:11:45.983 "memory_domains": [ 00:11:45.983 { 00:11:45.983 "dma_device_id": "system", 00:11:45.983 "dma_device_type": 1 00:11:45.983 }, 00:11:45.983 { 00:11:45.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.983 "dma_device_type": 2 00:11:45.983 } 00:11:45.983 ], 00:11:45.983 "driver_specific": {} 00:11:45.983 } 00:11:45.983 ] 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.983 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.984 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.984 "name": "Existed_Raid", 00:11:45.984 "uuid": "09ae09fb-6043-4e91-95cf-a8bbd204e909", 00:11:45.984 "strip_size_kb": 0, 00:11:45.984 "state": "configuring", 00:11:45.984 "raid_level": "raid1", 00:11:45.984 "superblock": true, 00:11:45.984 "num_base_bdevs": 4, 00:11:45.984 "num_base_bdevs_discovered": 2, 00:11:45.984 "num_base_bdevs_operational": 4, 00:11:45.984 
"base_bdevs_list": [ 00:11:45.984 { 00:11:45.984 "name": "BaseBdev1", 00:11:45.984 "uuid": "1eb00ca1-c6f9-417f-9de4-dce7a7576010", 00:11:45.984 "is_configured": true, 00:11:45.984 "data_offset": 2048, 00:11:45.984 "data_size": 63488 00:11:45.984 }, 00:11:45.984 { 00:11:45.984 "name": "BaseBdev2", 00:11:45.984 "uuid": "1d09f6f9-7395-4d2a-af38-c8429aceac99", 00:11:45.984 "is_configured": true, 00:11:45.984 "data_offset": 2048, 00:11:45.984 "data_size": 63488 00:11:45.984 }, 00:11:45.984 { 00:11:45.984 "name": "BaseBdev3", 00:11:45.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.984 "is_configured": false, 00:11:45.984 "data_offset": 0, 00:11:45.984 "data_size": 0 00:11:45.984 }, 00:11:45.984 { 00:11:45.984 "name": "BaseBdev4", 00:11:45.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.984 "is_configured": false, 00:11:45.984 "data_offset": 0, 00:11:45.984 "data_size": 0 00:11:45.984 } 00:11:45.984 ] 00:11:45.984 }' 00:11:45.984 21:43:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.984 21:43:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.244 [2024-09-29 21:43:05.170404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.244 BaseBdev3 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.244 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.244 [ 00:11:46.244 { 00:11:46.244 "name": "BaseBdev3", 00:11:46.244 "aliases": [ 00:11:46.244 "cf565203-990d-480e-a7f0-e511d8e4137e" 00:11:46.244 ], 00:11:46.244 "product_name": "Malloc disk", 00:11:46.244 "block_size": 512, 00:11:46.244 "num_blocks": 65536, 00:11:46.244 "uuid": "cf565203-990d-480e-a7f0-e511d8e4137e", 00:11:46.244 "assigned_rate_limits": { 00:11:46.244 "rw_ios_per_sec": 0, 00:11:46.244 "rw_mbytes_per_sec": 0, 00:11:46.244 "r_mbytes_per_sec": 0, 00:11:46.244 "w_mbytes_per_sec": 0 00:11:46.244 }, 00:11:46.244 "claimed": true, 00:11:46.244 "claim_type": "exclusive_write", 00:11:46.244 "zoned": false, 00:11:46.244 "supported_io_types": { 00:11:46.244 "read": true, 00:11:46.244 
"write": true, 00:11:46.244 "unmap": true, 00:11:46.244 "flush": true, 00:11:46.244 "reset": true, 00:11:46.244 "nvme_admin": false, 00:11:46.244 "nvme_io": false, 00:11:46.244 "nvme_io_md": false, 00:11:46.244 "write_zeroes": true, 00:11:46.244 "zcopy": true, 00:11:46.244 "get_zone_info": false, 00:11:46.244 "zone_management": false, 00:11:46.244 "zone_append": false, 00:11:46.244 "compare": false, 00:11:46.244 "compare_and_write": false, 00:11:46.244 "abort": true, 00:11:46.244 "seek_hole": false, 00:11:46.244 "seek_data": false, 00:11:46.244 "copy": true, 00:11:46.244 "nvme_iov_md": false 00:11:46.244 }, 00:11:46.244 "memory_domains": [ 00:11:46.244 { 00:11:46.244 "dma_device_id": "system", 00:11:46.244 "dma_device_type": 1 00:11:46.244 }, 00:11:46.244 { 00:11:46.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.244 "dma_device_type": 2 00:11:46.245 } 00:11:46.245 ], 00:11:46.245 "driver_specific": {} 00:11:46.245 } 00:11:46.245 ] 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.245 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.504 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.504 "name": "Existed_Raid", 00:11:46.504 "uuid": "09ae09fb-6043-4e91-95cf-a8bbd204e909", 00:11:46.504 "strip_size_kb": 0, 00:11:46.504 "state": "configuring", 00:11:46.504 "raid_level": "raid1", 00:11:46.504 "superblock": true, 00:11:46.504 "num_base_bdevs": 4, 00:11:46.504 "num_base_bdevs_discovered": 3, 00:11:46.504 "num_base_bdevs_operational": 4, 00:11:46.504 "base_bdevs_list": [ 00:11:46.504 { 00:11:46.504 "name": "BaseBdev1", 00:11:46.504 "uuid": "1eb00ca1-c6f9-417f-9de4-dce7a7576010", 00:11:46.504 "is_configured": true, 00:11:46.504 "data_offset": 2048, 00:11:46.504 "data_size": 63488 00:11:46.504 }, 00:11:46.504 { 00:11:46.504 "name": "BaseBdev2", 00:11:46.504 "uuid": 
"1d09f6f9-7395-4d2a-af38-c8429aceac99", 00:11:46.504 "is_configured": true, 00:11:46.504 "data_offset": 2048, 00:11:46.504 "data_size": 63488 00:11:46.504 }, 00:11:46.504 { 00:11:46.504 "name": "BaseBdev3", 00:11:46.504 "uuid": "cf565203-990d-480e-a7f0-e511d8e4137e", 00:11:46.504 "is_configured": true, 00:11:46.504 "data_offset": 2048, 00:11:46.504 "data_size": 63488 00:11:46.504 }, 00:11:46.504 { 00:11:46.504 "name": "BaseBdev4", 00:11:46.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.504 "is_configured": false, 00:11:46.504 "data_offset": 0, 00:11:46.504 "data_size": 0 00:11:46.504 } 00:11:46.504 ] 00:11:46.504 }' 00:11:46.504 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.504 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.763 [2024-09-29 21:43:05.663816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:46.763 [2024-09-29 21:43:05.664266] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:46.763 [2024-09-29 21:43:05.664326] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.763 [2024-09-29 21:43:05.664653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:46.763 [2024-09-29 21:43:05.664866] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:46.763 [2024-09-29 21:43:05.664915] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:46.763 BaseBdev4 00:11:46.763 [2024-09-29 21:43:05.665124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.763 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.763 [ 00:11:46.763 { 00:11:46.764 "name": "BaseBdev4", 00:11:46.764 "aliases": [ 00:11:46.764 "0f797ea4-928e-4d76-923e-46245c77aec2" 00:11:46.764 ], 00:11:46.764 "product_name": "Malloc disk", 00:11:46.764 "block_size": 512, 00:11:46.764 
"num_blocks": 65536, 00:11:46.764 "uuid": "0f797ea4-928e-4d76-923e-46245c77aec2", 00:11:46.764 "assigned_rate_limits": { 00:11:46.764 "rw_ios_per_sec": 0, 00:11:46.764 "rw_mbytes_per_sec": 0, 00:11:46.764 "r_mbytes_per_sec": 0, 00:11:46.764 "w_mbytes_per_sec": 0 00:11:46.764 }, 00:11:46.764 "claimed": true, 00:11:46.764 "claim_type": "exclusive_write", 00:11:46.764 "zoned": false, 00:11:46.764 "supported_io_types": { 00:11:46.764 "read": true, 00:11:46.764 "write": true, 00:11:46.764 "unmap": true, 00:11:46.764 "flush": true, 00:11:46.764 "reset": true, 00:11:46.764 "nvme_admin": false, 00:11:46.764 "nvme_io": false, 00:11:46.764 "nvme_io_md": false, 00:11:46.764 "write_zeroes": true, 00:11:46.764 "zcopy": true, 00:11:46.764 "get_zone_info": false, 00:11:46.764 "zone_management": false, 00:11:46.764 "zone_append": false, 00:11:46.764 "compare": false, 00:11:46.764 "compare_and_write": false, 00:11:46.764 "abort": true, 00:11:46.764 "seek_hole": false, 00:11:46.764 "seek_data": false, 00:11:46.764 "copy": true, 00:11:46.764 "nvme_iov_md": false 00:11:46.764 }, 00:11:46.764 "memory_domains": [ 00:11:46.764 { 00:11:46.764 "dma_device_id": "system", 00:11:46.764 "dma_device_type": 1 00:11:46.764 }, 00:11:46.764 { 00:11:46.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.764 "dma_device_type": 2 00:11:46.764 } 00:11:46.764 ], 00:11:46.764 "driver_specific": {} 00:11:46.764 } 00:11:46.764 ] 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.764 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.023 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.023 "name": "Existed_Raid", 00:11:47.023 "uuid": "09ae09fb-6043-4e91-95cf-a8bbd204e909", 00:11:47.023 "strip_size_kb": 0, 00:11:47.023 "state": "online", 00:11:47.023 "raid_level": "raid1", 00:11:47.023 "superblock": true, 00:11:47.023 "num_base_bdevs": 4, 
00:11:47.023 "num_base_bdevs_discovered": 4, 00:11:47.023 "num_base_bdevs_operational": 4, 00:11:47.023 "base_bdevs_list": [ 00:11:47.023 { 00:11:47.023 "name": "BaseBdev1", 00:11:47.023 "uuid": "1eb00ca1-c6f9-417f-9de4-dce7a7576010", 00:11:47.023 "is_configured": true, 00:11:47.023 "data_offset": 2048, 00:11:47.023 "data_size": 63488 00:11:47.023 }, 00:11:47.023 { 00:11:47.023 "name": "BaseBdev2", 00:11:47.023 "uuid": "1d09f6f9-7395-4d2a-af38-c8429aceac99", 00:11:47.023 "is_configured": true, 00:11:47.023 "data_offset": 2048, 00:11:47.023 "data_size": 63488 00:11:47.023 }, 00:11:47.023 { 00:11:47.023 "name": "BaseBdev3", 00:11:47.023 "uuid": "cf565203-990d-480e-a7f0-e511d8e4137e", 00:11:47.023 "is_configured": true, 00:11:47.023 "data_offset": 2048, 00:11:47.023 "data_size": 63488 00:11:47.023 }, 00:11:47.023 { 00:11:47.023 "name": "BaseBdev4", 00:11:47.023 "uuid": "0f797ea4-928e-4d76-923e-46245c77aec2", 00:11:47.023 "is_configured": true, 00:11:47.023 "data_offset": 2048, 00:11:47.023 "data_size": 63488 00:11:47.023 } 00:11:47.023 ] 00:11:47.023 }' 00:11:47.023 21:43:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.023 21:43:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.283 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:47.283 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:47.283 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:47.283 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:47.283 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:47.283 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:47.283 
21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:47.283 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:47.283 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.283 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.283 [2024-09-29 21:43:06.131341] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:47.283 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.283 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:47.283 "name": "Existed_Raid", 00:11:47.283 "aliases": [ 00:11:47.283 "09ae09fb-6043-4e91-95cf-a8bbd204e909" 00:11:47.283 ], 00:11:47.283 "product_name": "Raid Volume", 00:11:47.283 "block_size": 512, 00:11:47.283 "num_blocks": 63488, 00:11:47.283 "uuid": "09ae09fb-6043-4e91-95cf-a8bbd204e909", 00:11:47.283 "assigned_rate_limits": { 00:11:47.284 "rw_ios_per_sec": 0, 00:11:47.284 "rw_mbytes_per_sec": 0, 00:11:47.284 "r_mbytes_per_sec": 0, 00:11:47.284 "w_mbytes_per_sec": 0 00:11:47.284 }, 00:11:47.284 "claimed": false, 00:11:47.284 "zoned": false, 00:11:47.284 "supported_io_types": { 00:11:47.284 "read": true, 00:11:47.284 "write": true, 00:11:47.284 "unmap": false, 00:11:47.284 "flush": false, 00:11:47.284 "reset": true, 00:11:47.284 "nvme_admin": false, 00:11:47.284 "nvme_io": false, 00:11:47.284 "nvme_io_md": false, 00:11:47.284 "write_zeroes": true, 00:11:47.284 "zcopy": false, 00:11:47.284 "get_zone_info": false, 00:11:47.284 "zone_management": false, 00:11:47.284 "zone_append": false, 00:11:47.284 "compare": false, 00:11:47.284 "compare_and_write": false, 00:11:47.284 "abort": false, 00:11:47.284 "seek_hole": false, 00:11:47.284 "seek_data": false, 00:11:47.284 "copy": false, 00:11:47.284 
"nvme_iov_md": false 00:11:47.284 }, 00:11:47.284 "memory_domains": [ 00:11:47.284 { 00:11:47.284 "dma_device_id": "system", 00:11:47.284 "dma_device_type": 1 00:11:47.284 }, 00:11:47.284 { 00:11:47.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.284 "dma_device_type": 2 00:11:47.284 }, 00:11:47.284 { 00:11:47.284 "dma_device_id": "system", 00:11:47.284 "dma_device_type": 1 00:11:47.284 }, 00:11:47.284 { 00:11:47.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.284 "dma_device_type": 2 00:11:47.284 }, 00:11:47.284 { 00:11:47.284 "dma_device_id": "system", 00:11:47.284 "dma_device_type": 1 00:11:47.284 }, 00:11:47.284 { 00:11:47.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.284 "dma_device_type": 2 00:11:47.284 }, 00:11:47.284 { 00:11:47.284 "dma_device_id": "system", 00:11:47.284 "dma_device_type": 1 00:11:47.284 }, 00:11:47.284 { 00:11:47.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.284 "dma_device_type": 2 00:11:47.284 } 00:11:47.284 ], 00:11:47.284 "driver_specific": { 00:11:47.284 "raid": { 00:11:47.284 "uuid": "09ae09fb-6043-4e91-95cf-a8bbd204e909", 00:11:47.284 "strip_size_kb": 0, 00:11:47.284 "state": "online", 00:11:47.284 "raid_level": "raid1", 00:11:47.284 "superblock": true, 00:11:47.284 "num_base_bdevs": 4, 00:11:47.284 "num_base_bdevs_discovered": 4, 00:11:47.284 "num_base_bdevs_operational": 4, 00:11:47.284 "base_bdevs_list": [ 00:11:47.284 { 00:11:47.284 "name": "BaseBdev1", 00:11:47.284 "uuid": "1eb00ca1-c6f9-417f-9de4-dce7a7576010", 00:11:47.284 "is_configured": true, 00:11:47.284 "data_offset": 2048, 00:11:47.284 "data_size": 63488 00:11:47.284 }, 00:11:47.284 { 00:11:47.284 "name": "BaseBdev2", 00:11:47.284 "uuid": "1d09f6f9-7395-4d2a-af38-c8429aceac99", 00:11:47.284 "is_configured": true, 00:11:47.284 "data_offset": 2048, 00:11:47.284 "data_size": 63488 00:11:47.284 }, 00:11:47.284 { 00:11:47.284 "name": "BaseBdev3", 00:11:47.284 "uuid": "cf565203-990d-480e-a7f0-e511d8e4137e", 00:11:47.284 "is_configured": true, 
00:11:47.284 "data_offset": 2048, 00:11:47.284 "data_size": 63488 00:11:47.284 }, 00:11:47.284 { 00:11:47.284 "name": "BaseBdev4", 00:11:47.284 "uuid": "0f797ea4-928e-4d76-923e-46245c77aec2", 00:11:47.284 "is_configured": true, 00:11:47.284 "data_offset": 2048, 00:11:47.284 "data_size": 63488 00:11:47.284 } 00:11:47.284 ] 00:11:47.284 } 00:11:47.284 } 00:11:47.284 }' 00:11:47.284 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:47.284 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:47.284 BaseBdev2 00:11:47.284 BaseBdev3 00:11:47.284 BaseBdev4' 00:11:47.284 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.284 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:47.284 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.284 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:47.284 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.284 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.284 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.544 21:43:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.544 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.545 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.545 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.545 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:47.545 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:47.545 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:47.545 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.545 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.545 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.545 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:47.545 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:47.545 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:47.545 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.545 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.545 [2024-09-29 21:43:06.426576] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:47.804 21:43:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.804 "name": "Existed_Raid", 00:11:47.804 "uuid": "09ae09fb-6043-4e91-95cf-a8bbd204e909", 00:11:47.804 "strip_size_kb": 0, 00:11:47.804 
"state": "online", 00:11:47.804 "raid_level": "raid1", 00:11:47.804 "superblock": true, 00:11:47.804 "num_base_bdevs": 4, 00:11:47.804 "num_base_bdevs_discovered": 3, 00:11:47.804 "num_base_bdevs_operational": 3, 00:11:47.804 "base_bdevs_list": [ 00:11:47.804 { 00:11:47.804 "name": null, 00:11:47.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.804 "is_configured": false, 00:11:47.804 "data_offset": 0, 00:11:47.804 "data_size": 63488 00:11:47.804 }, 00:11:47.804 { 00:11:47.804 "name": "BaseBdev2", 00:11:47.804 "uuid": "1d09f6f9-7395-4d2a-af38-c8429aceac99", 00:11:47.804 "is_configured": true, 00:11:47.804 "data_offset": 2048, 00:11:47.804 "data_size": 63488 00:11:47.804 }, 00:11:47.804 { 00:11:47.804 "name": "BaseBdev3", 00:11:47.804 "uuid": "cf565203-990d-480e-a7f0-e511d8e4137e", 00:11:47.804 "is_configured": true, 00:11:47.804 "data_offset": 2048, 00:11:47.804 "data_size": 63488 00:11:47.804 }, 00:11:47.804 { 00:11:47.804 "name": "BaseBdev4", 00:11:47.804 "uuid": "0f797ea4-928e-4d76-923e-46245c77aec2", 00:11:47.804 "is_configured": true, 00:11:47.804 "data_offset": 2048, 00:11:47.804 "data_size": 63488 00:11:47.804 } 00:11:47.804 ] 00:11:47.804 }' 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.804 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.064 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:48.064 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.064 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:48.064 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.064 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.064 21:43:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.064 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.064 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.064 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.064 21:43:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:48.064 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.064 21:43:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.064 [2024-09-29 21:43:06.976531] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.324 [2024-09-29 21:43:07.136557] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.324 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.324 [2024-09-29 21:43:07.277386] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:48.324 [2024-09-29 21:43:07.277555] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.584 [2024-09-29 21:43:07.380360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.584 [2024-09-29 21:43:07.380519] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.584 [2024-09-29 21:43:07.380563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.584 BaseBdev2 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:48.584 [ 00:11:48.584 { 00:11:48.584 "name": "BaseBdev2", 00:11:48.584 "aliases": [ 00:11:48.584 "9e266e25-54c8-4026-942f-3defa034e45e" 00:11:48.584 ], 00:11:48.584 "product_name": "Malloc disk", 00:11:48.584 "block_size": 512, 00:11:48.584 "num_blocks": 65536, 00:11:48.584 "uuid": "9e266e25-54c8-4026-942f-3defa034e45e", 00:11:48.584 "assigned_rate_limits": { 00:11:48.584 "rw_ios_per_sec": 0, 00:11:48.584 "rw_mbytes_per_sec": 0, 00:11:48.584 "r_mbytes_per_sec": 0, 00:11:48.584 "w_mbytes_per_sec": 0 00:11:48.584 }, 00:11:48.584 "claimed": false, 00:11:48.584 "zoned": false, 00:11:48.584 "supported_io_types": { 00:11:48.584 "read": true, 00:11:48.584 "write": true, 00:11:48.584 "unmap": true, 00:11:48.584 "flush": true, 00:11:48.584 "reset": true, 00:11:48.584 "nvme_admin": false, 00:11:48.584 "nvme_io": false, 00:11:48.584 "nvme_io_md": false, 00:11:48.584 "write_zeroes": true, 00:11:48.584 "zcopy": true, 00:11:48.584 "get_zone_info": false, 00:11:48.584 "zone_management": false, 00:11:48.584 "zone_append": false, 00:11:48.584 "compare": false, 00:11:48.584 "compare_and_write": false, 00:11:48.584 "abort": true, 00:11:48.584 "seek_hole": false, 00:11:48.584 "seek_data": false, 00:11:48.584 "copy": true, 00:11:48.584 "nvme_iov_md": false 00:11:48.584 }, 00:11:48.584 "memory_domains": [ 00:11:48.584 { 00:11:48.584 "dma_device_id": "system", 00:11:48.584 "dma_device_type": 1 00:11:48.584 }, 00:11:48.584 { 00:11:48.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.584 "dma_device_type": 2 00:11:48.584 } 00:11:48.584 ], 00:11:48.584 "driver_specific": {} 00:11:48.584 } 00:11:48.584 ] 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:48.584 21:43:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.584 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.844 BaseBdev3 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.844 21:43:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.844 [ 00:11:48.844 { 00:11:48.844 "name": "BaseBdev3", 00:11:48.844 "aliases": [ 00:11:48.844 "5eee9973-0b15-43ec-97f9-1d95fe912df4" 00:11:48.844 ], 00:11:48.844 "product_name": "Malloc disk", 00:11:48.844 "block_size": 512, 00:11:48.844 "num_blocks": 65536, 00:11:48.844 "uuid": "5eee9973-0b15-43ec-97f9-1d95fe912df4", 00:11:48.844 "assigned_rate_limits": { 00:11:48.844 "rw_ios_per_sec": 0, 00:11:48.844 "rw_mbytes_per_sec": 0, 00:11:48.844 "r_mbytes_per_sec": 0, 00:11:48.844 "w_mbytes_per_sec": 0 00:11:48.844 }, 00:11:48.844 "claimed": false, 00:11:48.844 "zoned": false, 00:11:48.844 "supported_io_types": { 00:11:48.844 "read": true, 00:11:48.844 "write": true, 00:11:48.844 "unmap": true, 00:11:48.844 "flush": true, 00:11:48.844 "reset": true, 00:11:48.844 "nvme_admin": false, 00:11:48.844 "nvme_io": false, 00:11:48.844 "nvme_io_md": false, 00:11:48.844 "write_zeroes": true, 00:11:48.844 "zcopy": true, 00:11:48.844 "get_zone_info": false, 00:11:48.844 "zone_management": false, 00:11:48.844 "zone_append": false, 00:11:48.844 "compare": false, 00:11:48.844 "compare_and_write": false, 00:11:48.844 "abort": true, 00:11:48.844 "seek_hole": false, 00:11:48.844 "seek_data": false, 00:11:48.844 "copy": true, 00:11:48.844 "nvme_iov_md": false 00:11:48.844 }, 00:11:48.844 "memory_domains": [ 00:11:48.844 { 00:11:48.844 "dma_device_id": "system", 00:11:48.844 "dma_device_type": 1 00:11:48.844 }, 00:11:48.844 { 00:11:48.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.844 "dma_device_type": 2 00:11:48.844 } 00:11:48.844 ], 00:11:48.844 "driver_specific": {} 00:11:48.844 } 00:11:48.844 ] 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.844 BaseBdev4 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.844 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.845 [ 00:11:48.845 { 00:11:48.845 "name": "BaseBdev4", 00:11:48.845 "aliases": [ 00:11:48.845 "7754a250-947e-44c3-8037-195ce11ef498" 00:11:48.845 ], 00:11:48.845 "product_name": "Malloc disk", 00:11:48.845 "block_size": 512, 00:11:48.845 "num_blocks": 65536, 00:11:48.845 "uuid": "7754a250-947e-44c3-8037-195ce11ef498", 00:11:48.845 "assigned_rate_limits": { 00:11:48.845 "rw_ios_per_sec": 0, 00:11:48.845 "rw_mbytes_per_sec": 0, 00:11:48.845 "r_mbytes_per_sec": 0, 00:11:48.845 "w_mbytes_per_sec": 0 00:11:48.845 }, 00:11:48.845 "claimed": false, 00:11:48.845 "zoned": false, 00:11:48.845 "supported_io_types": { 00:11:48.845 "read": true, 00:11:48.845 "write": true, 00:11:48.845 "unmap": true, 00:11:48.845 "flush": true, 00:11:48.845 "reset": true, 00:11:48.845 "nvme_admin": false, 00:11:48.845 "nvme_io": false, 00:11:48.845 "nvme_io_md": false, 00:11:48.845 "write_zeroes": true, 00:11:48.845 "zcopy": true, 00:11:48.845 "get_zone_info": false, 00:11:48.845 "zone_management": false, 00:11:48.845 "zone_append": false, 00:11:48.845 "compare": false, 00:11:48.845 "compare_and_write": false, 00:11:48.845 "abort": true, 00:11:48.845 "seek_hole": false, 00:11:48.845 "seek_data": false, 00:11:48.845 "copy": true, 00:11:48.845 "nvme_iov_md": false 00:11:48.845 }, 00:11:48.845 "memory_domains": [ 00:11:48.845 { 00:11:48.845 "dma_device_id": "system", 00:11:48.845 "dma_device_type": 1 00:11:48.845 }, 00:11:48.845 { 00:11:48.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.845 "dma_device_type": 2 00:11:48.845 } 00:11:48.845 ], 00:11:48.845 "driver_specific": {} 00:11:48.845 } 00:11:48.845 ] 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.845 [2024-09-29 21:43:07.697970] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:48.845 [2024-09-29 21:43:07.698087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:48.845 [2024-09-29 21:43:07.698148] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:48.845 [2024-09-29 21:43:07.700260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:48.845 [2024-09-29 21:43:07.700353] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.845 "name": "Existed_Raid", 00:11:48.845 "uuid": "e7437a35-9571-482e-844a-d7d74114e005", 00:11:48.845 "strip_size_kb": 0, 00:11:48.845 "state": "configuring", 00:11:48.845 "raid_level": "raid1", 00:11:48.845 "superblock": true, 00:11:48.845 "num_base_bdevs": 4, 00:11:48.845 "num_base_bdevs_discovered": 3, 00:11:48.845 "num_base_bdevs_operational": 4, 00:11:48.845 "base_bdevs_list": [ 00:11:48.845 { 00:11:48.845 "name": "BaseBdev1", 00:11:48.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.845 "is_configured": false, 00:11:48.845 "data_offset": 0, 00:11:48.845 "data_size": 0 00:11:48.845 }, 00:11:48.845 { 00:11:48.845 "name": "BaseBdev2", 00:11:48.845 "uuid": "9e266e25-54c8-4026-942f-3defa034e45e", 
00:11:48.845 "is_configured": true, 00:11:48.845 "data_offset": 2048, 00:11:48.845 "data_size": 63488 00:11:48.845 }, 00:11:48.845 { 00:11:48.845 "name": "BaseBdev3", 00:11:48.845 "uuid": "5eee9973-0b15-43ec-97f9-1d95fe912df4", 00:11:48.845 "is_configured": true, 00:11:48.845 "data_offset": 2048, 00:11:48.845 "data_size": 63488 00:11:48.845 }, 00:11:48.845 { 00:11:48.845 "name": "BaseBdev4", 00:11:48.845 "uuid": "7754a250-947e-44c3-8037-195ce11ef498", 00:11:48.845 "is_configured": true, 00:11:48.845 "data_offset": 2048, 00:11:48.845 "data_size": 63488 00:11:48.845 } 00:11:48.845 ] 00:11:48.845 }' 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.845 21:43:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.414 [2024-09-29 21:43:08.145192] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.414 "name": "Existed_Raid", 00:11:49.414 "uuid": "e7437a35-9571-482e-844a-d7d74114e005", 00:11:49.414 "strip_size_kb": 0, 00:11:49.414 "state": "configuring", 00:11:49.414 "raid_level": "raid1", 00:11:49.414 "superblock": true, 00:11:49.414 "num_base_bdevs": 4, 00:11:49.414 "num_base_bdevs_discovered": 2, 00:11:49.414 "num_base_bdevs_operational": 4, 00:11:49.414 "base_bdevs_list": [ 00:11:49.414 { 00:11:49.414 "name": "BaseBdev1", 00:11:49.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.414 "is_configured": false, 00:11:49.414 "data_offset": 0, 00:11:49.414 "data_size": 0 00:11:49.414 }, 00:11:49.414 { 00:11:49.414 "name": null, 00:11:49.414 "uuid": "9e266e25-54c8-4026-942f-3defa034e45e", 00:11:49.414 
"is_configured": false, 00:11:49.414 "data_offset": 0, 00:11:49.414 "data_size": 63488 00:11:49.414 }, 00:11:49.414 { 00:11:49.414 "name": "BaseBdev3", 00:11:49.414 "uuid": "5eee9973-0b15-43ec-97f9-1d95fe912df4", 00:11:49.414 "is_configured": true, 00:11:49.414 "data_offset": 2048, 00:11:49.414 "data_size": 63488 00:11:49.414 }, 00:11:49.414 { 00:11:49.414 "name": "BaseBdev4", 00:11:49.414 "uuid": "7754a250-947e-44c3-8037-195ce11ef498", 00:11:49.414 "is_configured": true, 00:11:49.414 "data_offset": 2048, 00:11:49.414 "data_size": 63488 00:11:49.414 } 00:11:49.414 ] 00:11:49.414 }' 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.414 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.673 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:49.673 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.673 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.674 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.674 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.933 [2024-09-29 21:43:08.702352] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.933 BaseBdev1 
00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.933 [ 00:11:49.933 { 00:11:49.933 "name": "BaseBdev1", 00:11:49.933 "aliases": [ 00:11:49.933 "497759e0-03e9-4995-ba54-1459237b05fc" 00:11:49.933 ], 00:11:49.933 "product_name": "Malloc disk", 00:11:49.933 "block_size": 512, 00:11:49.933 "num_blocks": 65536, 00:11:49.933 "uuid": "497759e0-03e9-4995-ba54-1459237b05fc", 00:11:49.933 "assigned_rate_limits": { 00:11:49.933 
"rw_ios_per_sec": 0, 00:11:49.933 "rw_mbytes_per_sec": 0, 00:11:49.933 "r_mbytes_per_sec": 0, 00:11:49.933 "w_mbytes_per_sec": 0 00:11:49.933 }, 00:11:49.933 "claimed": true, 00:11:49.933 "claim_type": "exclusive_write", 00:11:49.933 "zoned": false, 00:11:49.933 "supported_io_types": { 00:11:49.933 "read": true, 00:11:49.933 "write": true, 00:11:49.933 "unmap": true, 00:11:49.933 "flush": true, 00:11:49.933 "reset": true, 00:11:49.933 "nvme_admin": false, 00:11:49.933 "nvme_io": false, 00:11:49.933 "nvme_io_md": false, 00:11:49.933 "write_zeroes": true, 00:11:49.933 "zcopy": true, 00:11:49.933 "get_zone_info": false, 00:11:49.933 "zone_management": false, 00:11:49.933 "zone_append": false, 00:11:49.933 "compare": false, 00:11:49.933 "compare_and_write": false, 00:11:49.933 "abort": true, 00:11:49.933 "seek_hole": false, 00:11:49.933 "seek_data": false, 00:11:49.933 "copy": true, 00:11:49.933 "nvme_iov_md": false 00:11:49.933 }, 00:11:49.933 "memory_domains": [ 00:11:49.933 { 00:11:49.933 "dma_device_id": "system", 00:11:49.933 "dma_device_type": 1 00:11:49.933 }, 00:11:49.933 { 00:11:49.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.933 "dma_device_type": 2 00:11:49.933 } 00:11:49.933 ], 00:11:49.933 "driver_specific": {} 00:11:49.933 } 00:11:49.933 ] 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:49.933 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.934 "name": "Existed_Raid", 00:11:49.934 "uuid": "e7437a35-9571-482e-844a-d7d74114e005", 00:11:49.934 "strip_size_kb": 0, 00:11:49.934 "state": "configuring", 00:11:49.934 "raid_level": "raid1", 00:11:49.934 "superblock": true, 00:11:49.934 "num_base_bdevs": 4, 00:11:49.934 "num_base_bdevs_discovered": 3, 00:11:49.934 "num_base_bdevs_operational": 4, 00:11:49.934 "base_bdevs_list": [ 00:11:49.934 { 00:11:49.934 "name": "BaseBdev1", 00:11:49.934 "uuid": "497759e0-03e9-4995-ba54-1459237b05fc", 00:11:49.934 "is_configured": true, 00:11:49.934 "data_offset": 2048, 00:11:49.934 "data_size": 63488 
00:11:49.934 }, 00:11:49.934 { 00:11:49.934 "name": null, 00:11:49.934 "uuid": "9e266e25-54c8-4026-942f-3defa034e45e", 00:11:49.934 "is_configured": false, 00:11:49.934 "data_offset": 0, 00:11:49.934 "data_size": 63488 00:11:49.934 }, 00:11:49.934 { 00:11:49.934 "name": "BaseBdev3", 00:11:49.934 "uuid": "5eee9973-0b15-43ec-97f9-1d95fe912df4", 00:11:49.934 "is_configured": true, 00:11:49.934 "data_offset": 2048, 00:11:49.934 "data_size": 63488 00:11:49.934 }, 00:11:49.934 { 00:11:49.934 "name": "BaseBdev4", 00:11:49.934 "uuid": "7754a250-947e-44c3-8037-195ce11ef498", 00:11:49.934 "is_configured": true, 00:11:49.934 "data_offset": 2048, 00:11:49.934 "data_size": 63488 00:11:49.934 } 00:11:49.934 ] 00:11:49.934 }' 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.934 21:43:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.193 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.193 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:50.193 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.193 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.453 
[2024-09-29 21:43:09.205538] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.453 21:43:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.453 "name": "Existed_Raid", 00:11:50.453 "uuid": "e7437a35-9571-482e-844a-d7d74114e005", 00:11:50.453 "strip_size_kb": 0, 00:11:50.453 "state": "configuring", 00:11:50.453 "raid_level": "raid1", 00:11:50.453 "superblock": true, 00:11:50.453 "num_base_bdevs": 4, 00:11:50.453 "num_base_bdevs_discovered": 2, 00:11:50.453 "num_base_bdevs_operational": 4, 00:11:50.453 "base_bdevs_list": [ 00:11:50.453 { 00:11:50.453 "name": "BaseBdev1", 00:11:50.453 "uuid": "497759e0-03e9-4995-ba54-1459237b05fc", 00:11:50.453 "is_configured": true, 00:11:50.453 "data_offset": 2048, 00:11:50.453 "data_size": 63488 00:11:50.453 }, 00:11:50.453 { 00:11:50.453 "name": null, 00:11:50.453 "uuid": "9e266e25-54c8-4026-942f-3defa034e45e", 00:11:50.453 "is_configured": false, 00:11:50.453 "data_offset": 0, 00:11:50.453 "data_size": 63488 00:11:50.453 }, 00:11:50.453 { 00:11:50.453 "name": null, 00:11:50.453 "uuid": "5eee9973-0b15-43ec-97f9-1d95fe912df4", 00:11:50.453 "is_configured": false, 00:11:50.453 "data_offset": 0, 00:11:50.453 "data_size": 63488 00:11:50.453 }, 00:11:50.453 { 00:11:50.453 "name": "BaseBdev4", 00:11:50.453 "uuid": "7754a250-947e-44c3-8037-195ce11ef498", 00:11:50.453 "is_configured": true, 00:11:50.453 "data_offset": 2048, 00:11:50.453 "data_size": 63488 00:11:50.453 } 00:11:50.453 ] 00:11:50.453 }' 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.453 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.712 
21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.712 [2024-09-29 21:43:09.680719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:50.712 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.972 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.972 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.972 "name": "Existed_Raid", 00:11:50.972 "uuid": "e7437a35-9571-482e-844a-d7d74114e005", 00:11:50.972 "strip_size_kb": 0, 00:11:50.972 "state": "configuring", 00:11:50.972 "raid_level": "raid1", 00:11:50.972 "superblock": true, 00:11:50.972 "num_base_bdevs": 4, 00:11:50.972 "num_base_bdevs_discovered": 3, 00:11:50.972 "num_base_bdevs_operational": 4, 00:11:50.972 "base_bdevs_list": [ 00:11:50.972 { 00:11:50.972 "name": "BaseBdev1", 00:11:50.972 "uuid": "497759e0-03e9-4995-ba54-1459237b05fc", 00:11:50.972 "is_configured": true, 00:11:50.972 "data_offset": 2048, 00:11:50.972 "data_size": 63488 00:11:50.972 }, 00:11:50.972 { 00:11:50.972 "name": null, 00:11:50.972 "uuid": "9e266e25-54c8-4026-942f-3defa034e45e", 00:11:50.972 "is_configured": false, 00:11:50.972 "data_offset": 0, 00:11:50.972 "data_size": 63488 00:11:50.972 }, 00:11:50.972 { 00:11:50.972 "name": "BaseBdev3", 00:11:50.972 "uuid": "5eee9973-0b15-43ec-97f9-1d95fe912df4", 00:11:50.972 "is_configured": true, 00:11:50.972 "data_offset": 2048, 00:11:50.972 "data_size": 63488 00:11:50.972 }, 00:11:50.972 { 00:11:50.972 "name": "BaseBdev4", 00:11:50.972 "uuid": 
"7754a250-947e-44c3-8037-195ce11ef498", 00:11:50.972 "is_configured": true, 00:11:50.972 "data_offset": 2048, 00:11:50.972 "data_size": 63488 00:11:50.972 } 00:11:50.972 ] 00:11:50.972 }' 00:11:50.972 21:43:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.972 21:43:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.231 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.231 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:51.231 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.231 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.231 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.231 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:51.231 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:51.231 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.231 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.231 [2024-09-29 21:43:10.152139] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.491 "name": "Existed_Raid", 00:11:51.491 "uuid": "e7437a35-9571-482e-844a-d7d74114e005", 00:11:51.491 "strip_size_kb": 0, 00:11:51.491 "state": "configuring", 00:11:51.491 "raid_level": "raid1", 00:11:51.491 "superblock": true, 00:11:51.491 "num_base_bdevs": 4, 00:11:51.491 "num_base_bdevs_discovered": 2, 00:11:51.491 "num_base_bdevs_operational": 4, 00:11:51.491 "base_bdevs_list": [ 00:11:51.491 { 00:11:51.491 "name": null, 00:11:51.491 
"uuid": "497759e0-03e9-4995-ba54-1459237b05fc", 00:11:51.491 "is_configured": false, 00:11:51.491 "data_offset": 0, 00:11:51.491 "data_size": 63488 00:11:51.491 }, 00:11:51.491 { 00:11:51.491 "name": null, 00:11:51.491 "uuid": "9e266e25-54c8-4026-942f-3defa034e45e", 00:11:51.491 "is_configured": false, 00:11:51.491 "data_offset": 0, 00:11:51.491 "data_size": 63488 00:11:51.491 }, 00:11:51.491 { 00:11:51.491 "name": "BaseBdev3", 00:11:51.491 "uuid": "5eee9973-0b15-43ec-97f9-1d95fe912df4", 00:11:51.491 "is_configured": true, 00:11:51.491 "data_offset": 2048, 00:11:51.491 "data_size": 63488 00:11:51.491 }, 00:11:51.491 { 00:11:51.491 "name": "BaseBdev4", 00:11:51.491 "uuid": "7754a250-947e-44c3-8037-195ce11ef498", 00:11:51.491 "is_configured": true, 00:11:51.491 "data_offset": 2048, 00:11:51.491 "data_size": 63488 00:11:51.491 } 00:11:51.491 ] 00:11:51.491 }' 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.491 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.751 [2024-09-29 21:43:10.708487] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.751 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.751 21:43:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.010 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.010 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.010 "name": "Existed_Raid", 00:11:52.010 "uuid": "e7437a35-9571-482e-844a-d7d74114e005", 00:11:52.010 "strip_size_kb": 0, 00:11:52.010 "state": "configuring", 00:11:52.010 "raid_level": "raid1", 00:11:52.010 "superblock": true, 00:11:52.010 "num_base_bdevs": 4, 00:11:52.010 "num_base_bdevs_discovered": 3, 00:11:52.010 "num_base_bdevs_operational": 4, 00:11:52.010 "base_bdevs_list": [ 00:11:52.010 { 00:11:52.010 "name": null, 00:11:52.010 "uuid": "497759e0-03e9-4995-ba54-1459237b05fc", 00:11:52.010 "is_configured": false, 00:11:52.010 "data_offset": 0, 00:11:52.010 "data_size": 63488 00:11:52.010 }, 00:11:52.010 { 00:11:52.010 "name": "BaseBdev2", 00:11:52.010 "uuid": "9e266e25-54c8-4026-942f-3defa034e45e", 00:11:52.010 "is_configured": true, 00:11:52.010 "data_offset": 2048, 00:11:52.010 "data_size": 63488 00:11:52.010 }, 00:11:52.010 { 00:11:52.010 "name": "BaseBdev3", 00:11:52.010 "uuid": "5eee9973-0b15-43ec-97f9-1d95fe912df4", 00:11:52.010 "is_configured": true, 00:11:52.010 "data_offset": 2048, 00:11:52.010 "data_size": 63488 00:11:52.010 }, 00:11:52.010 { 00:11:52.010 "name": "BaseBdev4", 00:11:52.010 "uuid": "7754a250-947e-44c3-8037-195ce11ef498", 00:11:52.010 "is_configured": true, 00:11:52.010 "data_offset": 2048, 00:11:52.010 "data_size": 63488 00:11:52.010 } 00:11:52.010 ] 00:11:52.010 }' 00:11:52.010 21:43:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.010 21:43:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.269 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.269 21:43:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.269 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.269 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:52.269 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.269 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:52.269 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.270 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:52.270 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.270 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.270 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 497759e0-03e9-4995-ba54-1459237b05fc 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.529 [2024-09-29 21:43:11.301766] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:52.529 [2024-09-29 21:43:11.302156] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:52.529 [2024-09-29 21:43:11.302213] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:52.529 [2024-09-29 21:43:11.302538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:52.529 [2024-09-29 21:43:11.302744] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:52.529 NewBaseBdev 00:11:52.529 [2024-09-29 21:43:11.302793] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:52.529 [2024-09-29 21:43:11.302994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.529 21:43:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.529 [ 00:11:52.529 { 00:11:52.529 "name": "NewBaseBdev", 00:11:52.529 "aliases": [ 00:11:52.529 "497759e0-03e9-4995-ba54-1459237b05fc" 00:11:52.529 ], 00:11:52.529 "product_name": "Malloc disk", 00:11:52.529 "block_size": 512, 00:11:52.529 "num_blocks": 65536, 00:11:52.529 "uuid": "497759e0-03e9-4995-ba54-1459237b05fc", 00:11:52.529 "assigned_rate_limits": { 00:11:52.529 "rw_ios_per_sec": 0, 00:11:52.529 "rw_mbytes_per_sec": 0, 00:11:52.529 "r_mbytes_per_sec": 0, 00:11:52.529 "w_mbytes_per_sec": 0 00:11:52.529 }, 00:11:52.529 "claimed": true, 00:11:52.529 "claim_type": "exclusive_write", 00:11:52.529 "zoned": false, 00:11:52.529 "supported_io_types": { 00:11:52.529 "read": true, 00:11:52.529 "write": true, 00:11:52.529 "unmap": true, 00:11:52.529 "flush": true, 00:11:52.529 "reset": true, 00:11:52.529 "nvme_admin": false, 00:11:52.529 "nvme_io": false, 00:11:52.529 "nvme_io_md": false, 00:11:52.529 "write_zeroes": true, 00:11:52.529 "zcopy": true, 00:11:52.529 "get_zone_info": false, 00:11:52.529 "zone_management": false, 00:11:52.529 "zone_append": false, 00:11:52.529 "compare": false, 00:11:52.529 "compare_and_write": false, 00:11:52.529 "abort": true, 00:11:52.529 "seek_hole": false, 00:11:52.529 "seek_data": false, 00:11:52.529 "copy": true, 00:11:52.529 "nvme_iov_md": false 00:11:52.529 }, 00:11:52.529 "memory_domains": [ 00:11:52.529 { 00:11:52.529 "dma_device_id": "system", 00:11:52.529 "dma_device_type": 1 00:11:52.529 }, 00:11:52.529 { 00:11:52.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.529 "dma_device_type": 2 00:11:52.529 } 00:11:52.529 ], 00:11:52.529 "driver_specific": {} 00:11:52.529 } 00:11:52.529 ] 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.529 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:52.530 21:43:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.530 "name": "Existed_Raid", 00:11:52.530 "uuid": "e7437a35-9571-482e-844a-d7d74114e005", 00:11:52.530 "strip_size_kb": 0, 00:11:52.530 
"state": "online", 00:11:52.530 "raid_level": "raid1", 00:11:52.530 "superblock": true, 00:11:52.530 "num_base_bdevs": 4, 00:11:52.530 "num_base_bdevs_discovered": 4, 00:11:52.530 "num_base_bdevs_operational": 4, 00:11:52.530 "base_bdevs_list": [ 00:11:52.530 { 00:11:52.530 "name": "NewBaseBdev", 00:11:52.530 "uuid": "497759e0-03e9-4995-ba54-1459237b05fc", 00:11:52.530 "is_configured": true, 00:11:52.530 "data_offset": 2048, 00:11:52.530 "data_size": 63488 00:11:52.530 }, 00:11:52.530 { 00:11:52.530 "name": "BaseBdev2", 00:11:52.530 "uuid": "9e266e25-54c8-4026-942f-3defa034e45e", 00:11:52.530 "is_configured": true, 00:11:52.530 "data_offset": 2048, 00:11:52.530 "data_size": 63488 00:11:52.530 }, 00:11:52.530 { 00:11:52.530 "name": "BaseBdev3", 00:11:52.530 "uuid": "5eee9973-0b15-43ec-97f9-1d95fe912df4", 00:11:52.530 "is_configured": true, 00:11:52.530 "data_offset": 2048, 00:11:52.530 "data_size": 63488 00:11:52.530 }, 00:11:52.530 { 00:11:52.530 "name": "BaseBdev4", 00:11:52.530 "uuid": "7754a250-947e-44c3-8037-195ce11ef498", 00:11:52.530 "is_configured": true, 00:11:52.530 "data_offset": 2048, 00:11:52.530 "data_size": 63488 00:11:52.530 } 00:11:52.530 ] 00:11:52.530 }' 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.530 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.790 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:52.790 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:52.790 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:52.790 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:52.790 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:52.790 
21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:52.790 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:52.790 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:52.790 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.790 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.790 [2024-09-29 21:43:11.709475] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.790 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.790 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:52.790 "name": "Existed_Raid", 00:11:52.790 "aliases": [ 00:11:52.790 "e7437a35-9571-482e-844a-d7d74114e005" 00:11:52.790 ], 00:11:52.790 "product_name": "Raid Volume", 00:11:52.790 "block_size": 512, 00:11:52.790 "num_blocks": 63488, 00:11:52.790 "uuid": "e7437a35-9571-482e-844a-d7d74114e005", 00:11:52.790 "assigned_rate_limits": { 00:11:52.790 "rw_ios_per_sec": 0, 00:11:52.790 "rw_mbytes_per_sec": 0, 00:11:52.790 "r_mbytes_per_sec": 0, 00:11:52.790 "w_mbytes_per_sec": 0 00:11:52.790 }, 00:11:52.790 "claimed": false, 00:11:52.790 "zoned": false, 00:11:52.790 "supported_io_types": { 00:11:52.790 "read": true, 00:11:52.790 "write": true, 00:11:52.790 "unmap": false, 00:11:52.790 "flush": false, 00:11:52.790 "reset": true, 00:11:52.790 "nvme_admin": false, 00:11:52.790 "nvme_io": false, 00:11:52.790 "nvme_io_md": false, 00:11:52.790 "write_zeroes": true, 00:11:52.790 "zcopy": false, 00:11:52.790 "get_zone_info": false, 00:11:52.790 "zone_management": false, 00:11:52.790 "zone_append": false, 00:11:52.790 "compare": false, 00:11:52.790 "compare_and_write": false, 00:11:52.790 
"abort": false, 00:11:52.790 "seek_hole": false, 00:11:52.790 "seek_data": false, 00:11:52.790 "copy": false, 00:11:52.790 "nvme_iov_md": false 00:11:52.790 }, 00:11:52.790 "memory_domains": [ 00:11:52.790 { 00:11:52.790 "dma_device_id": "system", 00:11:52.790 "dma_device_type": 1 00:11:52.790 }, 00:11:52.790 { 00:11:52.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.790 "dma_device_type": 2 00:11:52.790 }, 00:11:52.790 { 00:11:52.790 "dma_device_id": "system", 00:11:52.790 "dma_device_type": 1 00:11:52.790 }, 00:11:52.790 { 00:11:52.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.790 "dma_device_type": 2 00:11:52.790 }, 00:11:52.790 { 00:11:52.790 "dma_device_id": "system", 00:11:52.790 "dma_device_type": 1 00:11:52.790 }, 00:11:52.790 { 00:11:52.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.790 "dma_device_type": 2 00:11:52.790 }, 00:11:52.790 { 00:11:52.790 "dma_device_id": "system", 00:11:52.790 "dma_device_type": 1 00:11:52.790 }, 00:11:52.790 { 00:11:52.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.790 "dma_device_type": 2 00:11:52.790 } 00:11:52.790 ], 00:11:52.790 "driver_specific": { 00:11:52.790 "raid": { 00:11:52.790 "uuid": "e7437a35-9571-482e-844a-d7d74114e005", 00:11:52.790 "strip_size_kb": 0, 00:11:52.790 "state": "online", 00:11:52.790 "raid_level": "raid1", 00:11:52.790 "superblock": true, 00:11:52.790 "num_base_bdevs": 4, 00:11:52.790 "num_base_bdevs_discovered": 4, 00:11:52.790 "num_base_bdevs_operational": 4, 00:11:52.790 "base_bdevs_list": [ 00:11:52.790 { 00:11:52.790 "name": "NewBaseBdev", 00:11:52.790 "uuid": "497759e0-03e9-4995-ba54-1459237b05fc", 00:11:52.790 "is_configured": true, 00:11:52.790 "data_offset": 2048, 00:11:52.790 "data_size": 63488 00:11:52.790 }, 00:11:52.790 { 00:11:52.790 "name": "BaseBdev2", 00:11:52.790 "uuid": "9e266e25-54c8-4026-942f-3defa034e45e", 00:11:52.790 "is_configured": true, 00:11:52.790 "data_offset": 2048, 00:11:52.790 "data_size": 63488 00:11:52.790 }, 00:11:52.790 { 
00:11:52.790 "name": "BaseBdev3", 00:11:52.790 "uuid": "5eee9973-0b15-43ec-97f9-1d95fe912df4", 00:11:52.790 "is_configured": true, 00:11:52.790 "data_offset": 2048, 00:11:52.790 "data_size": 63488 00:11:52.790 }, 00:11:52.790 { 00:11:52.790 "name": "BaseBdev4", 00:11:52.790 "uuid": "7754a250-947e-44c3-8037-195ce11ef498", 00:11:52.790 "is_configured": true, 00:11:52.790 "data_offset": 2048, 00:11:52.790 "data_size": 63488 00:11:52.790 } 00:11:52.790 ] 00:11:52.790 } 00:11:52.790 } 00:11:52.790 }' 00:11:52.790 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:53.050 BaseBdev2 00:11:53.050 BaseBdev3 00:11:53.050 BaseBdev4' 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.050 21:43:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.050 21:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.050 21:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.050 21:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:53.050 21:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.050 21:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.050 [2024-09-29 21:43:12.032543] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:53.310 [2024-09-29 21:43:12.032615] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.310 [2024-09-29 21:43:12.032710] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.310 [2024-09-29 21:43:12.033044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.310 [2024-09-29 21:43:12.033061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:53.310 21:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.310 21:43:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73937 00:11:53.310 21:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73937 ']' 00:11:53.310 21:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73937 00:11:53.310 21:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:53.310 21:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:53.310 21:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73937 00:11:53.310 killing process with pid 73937 00:11:53.310 21:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:53.310 21:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:53.310 21:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73937' 00:11:53.310 21:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73937 00:11:53.310 [2024-09-29 21:43:12.078001] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.310 21:43:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73937 00:11:53.570 [2024-09-29 21:43:12.495955] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.952 21:43:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:54.952 00:11:54.952 real 0m11.596s 00:11:54.952 user 0m17.899s 00:11:54.952 sys 0m2.271s 00:11:54.952 ************************************ 00:11:54.952 END TEST raid_state_function_test_sb 
00:11:54.952 ************************************ 00:11:54.952 21:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.952 21:43:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.952 21:43:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:54.952 21:43:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:54.952 21:43:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.952 21:43:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.952 ************************************ 00:11:54.952 START TEST raid_superblock_test 00:11:54.952 ************************************ 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:54.952 21:43:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74608 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74608 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74608 ']' 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.952 21:43:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.211 [2024-09-29 21:43:14.014709] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:55.211 [2024-09-29 21:43:14.014948] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74608 ] 00:11:55.211 [2024-09-29 21:43:14.185027] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.471 [2024-09-29 21:43:14.434705] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.732 [2024-09-29 21:43:14.656603] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.732 [2024-09-29 21:43:14.656747] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:55.992 
21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.992 malloc1 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.992 [2024-09-29 21:43:14.897664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:55.992 [2024-09-29 21:43:14.897771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.992 [2024-09-29 21:43:14.897833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:55.992 [2024-09-29 21:43:14.897865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.992 [2024-09-29 21:43:14.900277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.992 [2024-09-29 21:43:14.900346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:55.992 pt1 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.992 malloc2 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.992 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.992 [2024-09-29 21:43:14.969165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:55.992 [2024-09-29 21:43:14.969220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.992 [2024-09-29 21:43:14.969259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:55.992 [2024-09-29 21:43:14.969269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.992 [2024-09-29 21:43:14.971696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.992 [2024-09-29 21:43:14.971731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:56.253 
pt2 00:11:56.253 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.253 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.253 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.253 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:56.253 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:56.253 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:56.253 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.253 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.253 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.253 21:43:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:56.253 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.253 21:43:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.253 malloc3 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.253 [2024-09-29 21:43:15.030723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:56.253 [2024-09-29 21:43:15.030813] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.253 [2024-09-29 21:43:15.030878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:56.253 [2024-09-29 21:43:15.030905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.253 [2024-09-29 21:43:15.033287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.253 [2024-09-29 21:43:15.033356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:56.253 pt3 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.253 malloc4 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.253 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.253 [2024-09-29 21:43:15.094352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:56.253 [2024-09-29 21:43:15.094439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.254 [2024-09-29 21:43:15.094488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:56.254 [2024-09-29 21:43:15.094515] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.254 [2024-09-29 21:43:15.096910] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.254 [2024-09-29 21:43:15.096978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:56.254 pt4 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.254 [2024-09-29 21:43:15.106392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:56.254 [2024-09-29 21:43:15.108506] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:56.254 [2024-09-29 21:43:15.108626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:56.254 [2024-09-29 21:43:15.108687] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:56.254 [2024-09-29 21:43:15.108915] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:56.254 [2024-09-29 21:43:15.108960] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:56.254 [2024-09-29 21:43:15.109255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:56.254 [2024-09-29 21:43:15.109470] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:56.254 [2024-09-29 21:43:15.109518] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:56.254 [2024-09-29 21:43:15.109698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.254 
21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.254 "name": "raid_bdev1", 00:11:56.254 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:11:56.254 "strip_size_kb": 0, 00:11:56.254 "state": "online", 00:11:56.254 "raid_level": "raid1", 00:11:56.254 "superblock": true, 00:11:56.254 "num_base_bdevs": 4, 00:11:56.254 "num_base_bdevs_discovered": 4, 00:11:56.254 "num_base_bdevs_operational": 4, 00:11:56.254 "base_bdevs_list": [ 00:11:56.254 { 00:11:56.254 "name": "pt1", 00:11:56.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.254 "is_configured": true, 00:11:56.254 "data_offset": 2048, 00:11:56.254 "data_size": 63488 00:11:56.254 }, 00:11:56.254 { 00:11:56.254 "name": "pt2", 00:11:56.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.254 "is_configured": true, 00:11:56.254 "data_offset": 2048, 00:11:56.254 "data_size": 63488 00:11:56.254 }, 00:11:56.254 { 00:11:56.254 "name": "pt3", 00:11:56.254 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:56.254 "is_configured": true, 00:11:56.254 "data_offset": 2048, 00:11:56.254 "data_size": 63488 
00:11:56.254 }, 00:11:56.254 { 00:11:56.254 "name": "pt4", 00:11:56.254 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:56.254 "is_configured": true, 00:11:56.254 "data_offset": 2048, 00:11:56.254 "data_size": 63488 00:11:56.254 } 00:11:56.254 ] 00:11:56.254 }' 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.254 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.825 [2024-09-29 21:43:15.561903] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:56.825 "name": "raid_bdev1", 00:11:56.825 "aliases": [ 00:11:56.825 "673f1bb0-c9b6-4ad7-be89-ceeb6835323d" 00:11:56.825 ], 
00:11:56.825 "product_name": "Raid Volume", 00:11:56.825 "block_size": 512, 00:11:56.825 "num_blocks": 63488, 00:11:56.825 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:11:56.825 "assigned_rate_limits": { 00:11:56.825 "rw_ios_per_sec": 0, 00:11:56.825 "rw_mbytes_per_sec": 0, 00:11:56.825 "r_mbytes_per_sec": 0, 00:11:56.825 "w_mbytes_per_sec": 0 00:11:56.825 }, 00:11:56.825 "claimed": false, 00:11:56.825 "zoned": false, 00:11:56.825 "supported_io_types": { 00:11:56.825 "read": true, 00:11:56.825 "write": true, 00:11:56.825 "unmap": false, 00:11:56.825 "flush": false, 00:11:56.825 "reset": true, 00:11:56.825 "nvme_admin": false, 00:11:56.825 "nvme_io": false, 00:11:56.825 "nvme_io_md": false, 00:11:56.825 "write_zeroes": true, 00:11:56.825 "zcopy": false, 00:11:56.825 "get_zone_info": false, 00:11:56.825 "zone_management": false, 00:11:56.825 "zone_append": false, 00:11:56.825 "compare": false, 00:11:56.825 "compare_and_write": false, 00:11:56.825 "abort": false, 00:11:56.825 "seek_hole": false, 00:11:56.825 "seek_data": false, 00:11:56.825 "copy": false, 00:11:56.825 "nvme_iov_md": false 00:11:56.825 }, 00:11:56.825 "memory_domains": [ 00:11:56.825 { 00:11:56.825 "dma_device_id": "system", 00:11:56.825 "dma_device_type": 1 00:11:56.825 }, 00:11:56.825 { 00:11:56.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.825 "dma_device_type": 2 00:11:56.825 }, 00:11:56.825 { 00:11:56.825 "dma_device_id": "system", 00:11:56.825 "dma_device_type": 1 00:11:56.825 }, 00:11:56.825 { 00:11:56.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.825 "dma_device_type": 2 00:11:56.825 }, 00:11:56.825 { 00:11:56.825 "dma_device_id": "system", 00:11:56.825 "dma_device_type": 1 00:11:56.825 }, 00:11:56.825 { 00:11:56.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.825 "dma_device_type": 2 00:11:56.825 }, 00:11:56.825 { 00:11:56.825 "dma_device_id": "system", 00:11:56.825 "dma_device_type": 1 00:11:56.825 }, 00:11:56.825 { 00:11:56.825 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:56.825 "dma_device_type": 2 00:11:56.825 } 00:11:56.825 ], 00:11:56.825 "driver_specific": { 00:11:56.825 "raid": { 00:11:56.825 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:11:56.825 "strip_size_kb": 0, 00:11:56.825 "state": "online", 00:11:56.825 "raid_level": "raid1", 00:11:56.825 "superblock": true, 00:11:56.825 "num_base_bdevs": 4, 00:11:56.825 "num_base_bdevs_discovered": 4, 00:11:56.825 "num_base_bdevs_operational": 4, 00:11:56.825 "base_bdevs_list": [ 00:11:56.825 { 00:11:56.825 "name": "pt1", 00:11:56.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.825 "is_configured": true, 00:11:56.825 "data_offset": 2048, 00:11:56.825 "data_size": 63488 00:11:56.825 }, 00:11:56.825 { 00:11:56.825 "name": "pt2", 00:11:56.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.825 "is_configured": true, 00:11:56.825 "data_offset": 2048, 00:11:56.825 "data_size": 63488 00:11:56.825 }, 00:11:56.825 { 00:11:56.825 "name": "pt3", 00:11:56.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:56.825 "is_configured": true, 00:11:56.825 "data_offset": 2048, 00:11:56.825 "data_size": 63488 00:11:56.825 }, 00:11:56.825 { 00:11:56.825 "name": "pt4", 00:11:56.825 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:56.825 "is_configured": true, 00:11:56.825 "data_offset": 2048, 00:11:56.825 "data_size": 63488 00:11:56.825 } 00:11:56.825 ] 00:11:56.825 } 00:11:56.825 } 00:11:56.825 }' 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:56.825 pt2 00:11:56.825 pt3 00:11:56.825 pt4' 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:56.825 21:43:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.825 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.085 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.085 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.085 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.085 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.085 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:57.085 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.085 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.085 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.085 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.085 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.086 [2024-09-29 21:43:15.873336] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=673f1bb0-c9b6-4ad7-be89-ceeb6835323d 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 673f1bb0-c9b6-4ad7-be89-ceeb6835323d ']' 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.086 [2024-09-29 21:43:15.916956] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.086 [2024-09-29 21:43:15.917026] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.086 [2024-09-29 21:43:15.917128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.086 [2024-09-29 21:43:15.917224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.086 [2024-09-29 21:43:15.917241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.086 21:43:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.086 21:43:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.086 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:57.086 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:57.086 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.086 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.086 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.086 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:57.086 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.086 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.086 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:57.086 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.347 21:43:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.347 [2024-09-29 21:43:16.080695] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:57.347 [2024-09-29 21:43:16.082951] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:57.347 [2024-09-29 21:43:16.083002] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:57.347 [2024-09-29 21:43:16.083035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:57.347 [2024-09-29 21:43:16.083106] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:57.347 [2024-09-29 21:43:16.083155] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:57.347 [2024-09-29 21:43:16.083174] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:57.347 [2024-09-29 21:43:16.083192] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:57.347 [2024-09-29 21:43:16.083206] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:57.347 [2024-09-29 21:43:16.083217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:57.347 request: 00:11:57.347 { 00:11:57.347 "name": "raid_bdev1", 00:11:57.347 "raid_level": "raid1", 00:11:57.347 "base_bdevs": [ 00:11:57.347 "malloc1", 00:11:57.347 "malloc2", 00:11:57.347 "malloc3", 00:11:57.347 "malloc4" 00:11:57.347 ], 00:11:57.347 "superblock": false, 00:11:57.347 "method": "bdev_raid_create", 00:11:57.347 "req_id": 1 00:11:57.347 } 00:11:57.347 Got JSON-RPC error response 00:11:57.347 response: 00:11:57.347 { 00:11:57.347 "code": -17, 00:11:57.347 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:57.347 } 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:57.347 
21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.347 [2024-09-29 21:43:16.148544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:57.347 [2024-09-29 21:43:16.148633] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.347 [2024-09-29 21:43:16.148665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:57.347 [2024-09-29 21:43:16.148694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.347 [2024-09-29 21:43:16.151162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.347 [2024-09-29 21:43:16.151235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:57.347 [2024-09-29 21:43:16.151328] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:57.347 [2024-09-29 21:43:16.151395] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:57.347 pt1 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.347 21:43:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.347 "name": "raid_bdev1", 00:11:57.347 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:11:57.347 "strip_size_kb": 0, 00:11:57.347 "state": "configuring", 00:11:57.347 "raid_level": "raid1", 00:11:57.347 "superblock": true, 00:11:57.347 "num_base_bdevs": 4, 00:11:57.347 "num_base_bdevs_discovered": 1, 00:11:57.347 "num_base_bdevs_operational": 4, 00:11:57.347 "base_bdevs_list": [ 00:11:57.347 { 00:11:57.347 "name": "pt1", 00:11:57.347 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.347 "is_configured": true, 00:11:57.347 "data_offset": 2048, 00:11:57.347 "data_size": 63488 00:11:57.347 }, 00:11:57.347 { 00:11:57.347 "name": null, 00:11:57.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.347 "is_configured": false, 00:11:57.347 "data_offset": 2048, 00:11:57.347 "data_size": 63488 00:11:57.347 }, 00:11:57.347 { 00:11:57.347 "name": null, 00:11:57.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.347 
"is_configured": false, 00:11:57.347 "data_offset": 2048, 00:11:57.347 "data_size": 63488 00:11:57.347 }, 00:11:57.347 { 00:11:57.347 "name": null, 00:11:57.347 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:57.347 "is_configured": false, 00:11:57.347 "data_offset": 2048, 00:11:57.347 "data_size": 63488 00:11:57.347 } 00:11:57.347 ] 00:11:57.347 }' 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.347 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.918 [2024-09-29 21:43:16.599839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:57.918 [2024-09-29 21:43:16.599915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.918 [2024-09-29 21:43:16.599938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:57.918 [2024-09-29 21:43:16.599949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.918 [2024-09-29 21:43:16.600557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.918 [2024-09-29 21:43:16.600587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:57.918 [2024-09-29 21:43:16.600677] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:57.918 [2024-09-29 21:43:16.600715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:57.918 pt2 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.918 [2024-09-29 21:43:16.611810] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.918 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.918 "name": "raid_bdev1", 00:11:57.918 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:11:57.918 "strip_size_kb": 0, 00:11:57.918 "state": "configuring", 00:11:57.918 "raid_level": "raid1", 00:11:57.918 "superblock": true, 00:11:57.918 "num_base_bdevs": 4, 00:11:57.918 "num_base_bdevs_discovered": 1, 00:11:57.918 "num_base_bdevs_operational": 4, 00:11:57.918 "base_bdevs_list": [ 00:11:57.919 { 00:11:57.919 "name": "pt1", 00:11:57.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:57.919 "is_configured": true, 00:11:57.919 "data_offset": 2048, 00:11:57.919 "data_size": 63488 00:11:57.919 }, 00:11:57.919 { 00:11:57.919 "name": null, 00:11:57.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:57.919 "is_configured": false, 00:11:57.919 "data_offset": 0, 00:11:57.919 "data_size": 63488 00:11:57.919 }, 00:11:57.919 { 00:11:57.919 "name": null, 00:11:57.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:57.919 "is_configured": false, 00:11:57.919 "data_offset": 2048, 00:11:57.919 "data_size": 63488 00:11:57.919 }, 00:11:57.919 { 00:11:57.919 "name": null, 00:11:57.919 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:57.919 "is_configured": false, 00:11:57.919 "data_offset": 2048, 00:11:57.919 "data_size": 63488 00:11:57.919 } 00:11:57.919 ] 00:11:57.919 }' 00:11:57.919 21:43:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.919 21:43:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.180 [2024-09-29 21:43:17.043055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:58.180 [2024-09-29 21:43:17.043155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.180 [2024-09-29 21:43:17.043196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:58.180 [2024-09-29 21:43:17.043228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.180 [2024-09-29 21:43:17.043731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.180 [2024-09-29 21:43:17.043754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:58.180 [2024-09-29 21:43:17.043837] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:58.180 [2024-09-29 21:43:17.043868] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:58.180 pt2 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:58.180 21:43:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.180 [2024-09-29 21:43:17.055021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:58.180 [2024-09-29 21:43:17.055094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.180 [2024-09-29 21:43:17.055111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:58.180 [2024-09-29 21:43:17.055120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.180 [2024-09-29 21:43:17.055522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.180 [2024-09-29 21:43:17.055546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:58.180 [2024-09-29 21:43:17.055605] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:58.180 [2024-09-29 21:43:17.055622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:58.180 pt3 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.180 [2024-09-29 21:43:17.066961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:58.180 [2024-09-29 
21:43:17.067001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:58.180 [2024-09-29 21:43:17.067016] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:58.180 [2024-09-29 21:43:17.067023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:58.180 [2024-09-29 21:43:17.067409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:58.180 [2024-09-29 21:43:17.067426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:58.180 [2024-09-29 21:43:17.067481] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:58.180 [2024-09-29 21:43:17.067506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:58.180 [2024-09-29 21:43:17.067648] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:58.180 [2024-09-29 21:43:17.067656] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:58.180 [2024-09-29 21:43:17.067909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:58.180 [2024-09-29 21:43:17.068082] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:58.180 [2024-09-29 21:43:17.068101] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:58.180 [2024-09-29 21:43:17.068263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.180 pt4 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.180 "name": "raid_bdev1", 00:11:58.180 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:11:58.180 "strip_size_kb": 0, 00:11:58.180 "state": "online", 00:11:58.180 "raid_level": "raid1", 00:11:58.180 "superblock": true, 00:11:58.180 "num_base_bdevs": 4, 00:11:58.180 
"num_base_bdevs_discovered": 4, 00:11:58.180 "num_base_bdevs_operational": 4, 00:11:58.180 "base_bdevs_list": [ 00:11:58.180 { 00:11:58.180 "name": "pt1", 00:11:58.180 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.180 "is_configured": true, 00:11:58.180 "data_offset": 2048, 00:11:58.180 "data_size": 63488 00:11:58.180 }, 00:11:58.180 { 00:11:58.180 "name": "pt2", 00:11:58.180 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.180 "is_configured": true, 00:11:58.180 "data_offset": 2048, 00:11:58.180 "data_size": 63488 00:11:58.180 }, 00:11:58.180 { 00:11:58.180 "name": "pt3", 00:11:58.180 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.180 "is_configured": true, 00:11:58.180 "data_offset": 2048, 00:11:58.180 "data_size": 63488 00:11:58.180 }, 00:11:58.180 { 00:11:58.180 "name": "pt4", 00:11:58.180 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:58.180 "is_configured": true, 00:11:58.180 "data_offset": 2048, 00:11:58.180 "data_size": 63488 00:11:58.180 } 00:11:58.180 ] 00:11:58.180 }' 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.180 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:58.751 [2024-09-29 21:43:17.502606] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:58.751 "name": "raid_bdev1", 00:11:58.751 "aliases": [ 00:11:58.751 "673f1bb0-c9b6-4ad7-be89-ceeb6835323d" 00:11:58.751 ], 00:11:58.751 "product_name": "Raid Volume", 00:11:58.751 "block_size": 512, 00:11:58.751 "num_blocks": 63488, 00:11:58.751 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:11:58.751 "assigned_rate_limits": { 00:11:58.751 "rw_ios_per_sec": 0, 00:11:58.751 "rw_mbytes_per_sec": 0, 00:11:58.751 "r_mbytes_per_sec": 0, 00:11:58.751 "w_mbytes_per_sec": 0 00:11:58.751 }, 00:11:58.751 "claimed": false, 00:11:58.751 "zoned": false, 00:11:58.751 "supported_io_types": { 00:11:58.751 "read": true, 00:11:58.751 "write": true, 00:11:58.751 "unmap": false, 00:11:58.751 "flush": false, 00:11:58.751 "reset": true, 00:11:58.751 "nvme_admin": false, 00:11:58.751 "nvme_io": false, 00:11:58.751 "nvme_io_md": false, 00:11:58.751 "write_zeroes": true, 00:11:58.751 "zcopy": false, 00:11:58.751 "get_zone_info": false, 00:11:58.751 "zone_management": false, 00:11:58.751 "zone_append": false, 00:11:58.751 "compare": false, 00:11:58.751 "compare_and_write": false, 00:11:58.751 "abort": false, 00:11:58.751 "seek_hole": false, 00:11:58.751 "seek_data": false, 00:11:58.751 "copy": false, 00:11:58.751 "nvme_iov_md": false 00:11:58.751 }, 00:11:58.751 "memory_domains": [ 00:11:58.751 { 00:11:58.751 "dma_device_id": "system", 00:11:58.751 
"dma_device_type": 1 00:11:58.751 }, 00:11:58.751 { 00:11:58.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.751 "dma_device_type": 2 00:11:58.751 }, 00:11:58.751 { 00:11:58.751 "dma_device_id": "system", 00:11:58.751 "dma_device_type": 1 00:11:58.751 }, 00:11:58.751 { 00:11:58.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.751 "dma_device_type": 2 00:11:58.751 }, 00:11:58.751 { 00:11:58.751 "dma_device_id": "system", 00:11:58.751 "dma_device_type": 1 00:11:58.751 }, 00:11:58.751 { 00:11:58.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.751 "dma_device_type": 2 00:11:58.751 }, 00:11:58.751 { 00:11:58.751 "dma_device_id": "system", 00:11:58.751 "dma_device_type": 1 00:11:58.751 }, 00:11:58.751 { 00:11:58.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.751 "dma_device_type": 2 00:11:58.751 } 00:11:58.751 ], 00:11:58.751 "driver_specific": { 00:11:58.751 "raid": { 00:11:58.751 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:11:58.751 "strip_size_kb": 0, 00:11:58.751 "state": "online", 00:11:58.751 "raid_level": "raid1", 00:11:58.751 "superblock": true, 00:11:58.751 "num_base_bdevs": 4, 00:11:58.751 "num_base_bdevs_discovered": 4, 00:11:58.751 "num_base_bdevs_operational": 4, 00:11:58.751 "base_bdevs_list": [ 00:11:58.751 { 00:11:58.751 "name": "pt1", 00:11:58.751 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:58.751 "is_configured": true, 00:11:58.751 "data_offset": 2048, 00:11:58.751 "data_size": 63488 00:11:58.751 }, 00:11:58.751 { 00:11:58.751 "name": "pt2", 00:11:58.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:58.751 "is_configured": true, 00:11:58.751 "data_offset": 2048, 00:11:58.751 "data_size": 63488 00:11:58.751 }, 00:11:58.751 { 00:11:58.751 "name": "pt3", 00:11:58.751 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:58.751 "is_configured": true, 00:11:58.751 "data_offset": 2048, 00:11:58.751 "data_size": 63488 00:11:58.751 }, 00:11:58.751 { 00:11:58.751 "name": "pt4", 00:11:58.751 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:58.751 "is_configured": true, 00:11:58.751 "data_offset": 2048, 00:11:58.751 "data_size": 63488 00:11:58.751 } 00:11:58.751 ] 00:11:58.751 } 00:11:58.751 } 00:11:58.751 }' 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:58.751 pt2 00:11:58.751 pt3 00:11:58.751 pt4' 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.751 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.011 21:43:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.011 [2024-09-29 21:43:17.845912] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 673f1bb0-c9b6-4ad7-be89-ceeb6835323d '!=' 673f1bb0-c9b6-4ad7-be89-ceeb6835323d ']' 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.011 [2024-09-29 21:43:17.893601] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:59.011 
21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.011 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.012 "name": "raid_bdev1", 00:11:59.012 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:11:59.012 "strip_size_kb": 0, 00:11:59.012 "state": 
"online", 00:11:59.012 "raid_level": "raid1", 00:11:59.012 "superblock": true, 00:11:59.012 "num_base_bdevs": 4, 00:11:59.012 "num_base_bdevs_discovered": 3, 00:11:59.012 "num_base_bdevs_operational": 3, 00:11:59.012 "base_bdevs_list": [ 00:11:59.012 { 00:11:59.012 "name": null, 00:11:59.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.012 "is_configured": false, 00:11:59.012 "data_offset": 0, 00:11:59.012 "data_size": 63488 00:11:59.012 }, 00:11:59.012 { 00:11:59.012 "name": "pt2", 00:11:59.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.012 "is_configured": true, 00:11:59.012 "data_offset": 2048, 00:11:59.012 "data_size": 63488 00:11:59.012 }, 00:11:59.012 { 00:11:59.012 "name": "pt3", 00:11:59.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.012 "is_configured": true, 00:11:59.012 "data_offset": 2048, 00:11:59.012 "data_size": 63488 00:11:59.012 }, 00:11:59.012 { 00:11:59.012 "name": "pt4", 00:11:59.012 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:59.012 "is_configured": true, 00:11:59.012 "data_offset": 2048, 00:11:59.012 "data_size": 63488 00:11:59.012 } 00:11:59.012 ] 00:11:59.012 }' 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.012 21:43:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.581 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:59.581 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.581 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.581 [2024-09-29 21:43:18.360777] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:59.581 [2024-09-29 21:43:18.360849] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.581 [2024-09-29 21:43:18.360943] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.581 [2024-09-29 21:43:18.361046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.581 [2024-09-29 21:43:18.361092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:59.581 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.581 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.581 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.581 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:59.581 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.581 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.582 [2024-09-29 21:43:18.456612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:59.582 [2024-09-29 
21:43:18.456661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.582 [2024-09-29 21:43:18.456680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:59.582 [2024-09-29 21:43:18.456689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.582 [2024-09-29 21:43:18.459184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.582 [2024-09-29 21:43:18.459218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:59.582 [2024-09-29 21:43:18.459295] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:59.582 [2024-09-29 21:43:18.459345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:59.582 pt2 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.582 21:43:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.582 "name": "raid_bdev1", 00:11:59.582 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:11:59.582 "strip_size_kb": 0, 00:11:59.582 "state": "configuring", 00:11:59.582 "raid_level": "raid1", 00:11:59.582 "superblock": true, 00:11:59.582 "num_base_bdevs": 4, 00:11:59.582 "num_base_bdevs_discovered": 1, 00:11:59.582 "num_base_bdevs_operational": 3, 00:11:59.582 "base_bdevs_list": [ 00:11:59.582 { 00:11:59.582 "name": null, 00:11:59.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.582 "is_configured": false, 00:11:59.582 "data_offset": 2048, 00:11:59.582 "data_size": 63488 00:11:59.582 }, 00:11:59.582 { 00:11:59.582 "name": "pt2", 00:11:59.582 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:59.582 "is_configured": true, 00:11:59.582 "data_offset": 2048, 00:11:59.582 "data_size": 63488 00:11:59.582 }, 00:11:59.582 { 00:11:59.582 "name": null, 00:11:59.582 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:59.582 "is_configured": false, 00:11:59.582 "data_offset": 2048, 00:11:59.582 "data_size": 63488 00:11:59.582 }, 00:11:59.582 { 00:11:59.582 "name": null, 00:11:59.582 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:59.582 "is_configured": false, 00:11:59.582 "data_offset": 2048, 00:11:59.582 "data_size": 63488 00:11:59.582 
} 00:11:59.582 ] 00:11:59.582 }' 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.582 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.152 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:00.152 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:00.152 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:00.152 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.153 [2024-09-29 21:43:18.836047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:00.153 [2024-09-29 21:43:18.836158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.153 [2024-09-29 21:43:18.836217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:00.153 [2024-09-29 21:43:18.836247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.153 [2024-09-29 21:43:18.836805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.153 [2024-09-29 21:43:18.836866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:00.153 [2024-09-29 21:43:18.836992] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:00.153 [2024-09-29 21:43:18.837066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:00.153 pt3 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.153 "name": "raid_bdev1", 00:12:00.153 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:12:00.153 "strip_size_kb": 0, 00:12:00.153 "state": "configuring", 00:12:00.153 "raid_level": "raid1", 00:12:00.153 "superblock": true, 00:12:00.153 "num_base_bdevs": 4, 00:12:00.153 "num_base_bdevs_discovered": 2, 
00:12:00.153 "num_base_bdevs_operational": 3, 00:12:00.153 "base_bdevs_list": [ 00:12:00.153 { 00:12:00.153 "name": null, 00:12:00.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.153 "is_configured": false, 00:12:00.153 "data_offset": 2048, 00:12:00.153 "data_size": 63488 00:12:00.153 }, 00:12:00.153 { 00:12:00.153 "name": "pt2", 00:12:00.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.153 "is_configured": true, 00:12:00.153 "data_offset": 2048, 00:12:00.153 "data_size": 63488 00:12:00.153 }, 00:12:00.153 { 00:12:00.153 "name": "pt3", 00:12:00.153 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:00.153 "is_configured": true, 00:12:00.153 "data_offset": 2048, 00:12:00.153 "data_size": 63488 00:12:00.153 }, 00:12:00.153 { 00:12:00.153 "name": null, 00:12:00.153 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:00.153 "is_configured": false, 00:12:00.153 "data_offset": 2048, 00:12:00.153 "data_size": 63488 00:12:00.153 } 00:12:00.153 ] 00:12:00.153 }' 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.153 21:43:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.413 [2024-09-29 21:43:19.307184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:00.413 [2024-09-29 
21:43:19.307281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.413 [2024-09-29 21:43:19.307307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:00.413 [2024-09-29 21:43:19.307316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.413 [2024-09-29 21:43:19.307814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.413 [2024-09-29 21:43:19.307832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:00.413 [2024-09-29 21:43:19.307911] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:00.413 [2024-09-29 21:43:19.307942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:00.413 [2024-09-29 21:43:19.308104] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:00.413 [2024-09-29 21:43:19.308113] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:00.413 [2024-09-29 21:43:19.308387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:00.413 [2024-09-29 21:43:19.308546] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:00.413 [2024-09-29 21:43:19.308559] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:00.413 [2024-09-29 21:43:19.308697] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.413 pt4 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.413 21:43:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.413 "name": "raid_bdev1", 00:12:00.413 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:12:00.413 "strip_size_kb": 0, 00:12:00.413 "state": "online", 00:12:00.413 "raid_level": "raid1", 00:12:00.413 "superblock": true, 00:12:00.413 "num_base_bdevs": 4, 00:12:00.413 "num_base_bdevs_discovered": 3, 00:12:00.413 "num_base_bdevs_operational": 3, 00:12:00.413 "base_bdevs_list": [ 00:12:00.413 { 00:12:00.413 "name": null, 00:12:00.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.413 
"is_configured": false, 00:12:00.413 "data_offset": 2048, 00:12:00.413 "data_size": 63488 00:12:00.413 }, 00:12:00.413 { 00:12:00.413 "name": "pt2", 00:12:00.413 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.413 "is_configured": true, 00:12:00.413 "data_offset": 2048, 00:12:00.413 "data_size": 63488 00:12:00.413 }, 00:12:00.413 { 00:12:00.413 "name": "pt3", 00:12:00.413 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:00.413 "is_configured": true, 00:12:00.413 "data_offset": 2048, 00:12:00.413 "data_size": 63488 00:12:00.413 }, 00:12:00.413 { 00:12:00.413 "name": "pt4", 00:12:00.413 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:00.413 "is_configured": true, 00:12:00.413 "data_offset": 2048, 00:12:00.413 "data_size": 63488 00:12:00.413 } 00:12:00.413 ] 00:12:00.413 }' 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.413 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.983 [2024-09-29 21:43:19.782345] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.983 [2024-09-29 21:43:19.782414] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.983 [2024-09-29 21:43:19.782516] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.983 [2024-09-29 21:43:19.782625] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.983 [2024-09-29 21:43:19.782674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.983 [2024-09-29 21:43:19.854221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:00.983 [2024-09-29 21:43:19.854331] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:12:00.983 [2024-09-29 21:43:19.854367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:00.983 [2024-09-29 21:43:19.854397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.983 [2024-09-29 21:43:19.856941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.983 [2024-09-29 21:43:19.857017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:00.983 [2024-09-29 21:43:19.857141] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:00.983 [2024-09-29 21:43:19.857221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:00.983 [2024-09-29 21:43:19.857389] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:00.983 [2024-09-29 21:43:19.857445] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.983 [2024-09-29 21:43:19.857494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:00.983 [2024-09-29 21:43:19.857592] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:00.983 [2024-09-29 21:43:19.857736] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:00.983 pt1 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.983 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.984 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.984 "name": "raid_bdev1", 00:12:00.984 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:12:00.984 "strip_size_kb": 0, 00:12:00.984 "state": "configuring", 00:12:00.984 "raid_level": "raid1", 00:12:00.984 "superblock": true, 00:12:00.984 "num_base_bdevs": 4, 00:12:00.984 "num_base_bdevs_discovered": 2, 00:12:00.984 "num_base_bdevs_operational": 3, 00:12:00.984 "base_bdevs_list": [ 00:12:00.984 { 00:12:00.984 "name": null, 00:12:00.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.984 "is_configured": false, 00:12:00.984 
"data_offset": 2048, 00:12:00.984 "data_size": 63488 00:12:00.984 }, 00:12:00.984 { 00:12:00.984 "name": "pt2", 00:12:00.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:00.984 "is_configured": true, 00:12:00.984 "data_offset": 2048, 00:12:00.984 "data_size": 63488 00:12:00.984 }, 00:12:00.984 { 00:12:00.984 "name": "pt3", 00:12:00.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:00.984 "is_configured": true, 00:12:00.984 "data_offset": 2048, 00:12:00.984 "data_size": 63488 00:12:00.984 }, 00:12:00.984 { 00:12:00.984 "name": null, 00:12:00.984 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:00.984 "is_configured": false, 00:12:00.984 "data_offset": 2048, 00:12:00.984 "data_size": 63488 00:12:00.984 } 00:12:00.984 ] 00:12:00.984 }' 00:12:00.984 21:43:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.984 21:43:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:01.554 [2024-09-29 21:43:20.293478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:01.554 [2024-09-29 21:43:20.293542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.554 [2024-09-29 21:43:20.293563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:01.554 [2024-09-29 21:43:20.293572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.554 [2024-09-29 21:43:20.293983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.554 [2024-09-29 21:43:20.294000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:01.554 [2024-09-29 21:43:20.294082] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:01.554 [2024-09-29 21:43:20.294103] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:01.554 [2024-09-29 21:43:20.294232] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:01.554 [2024-09-29 21:43:20.294240] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:01.554 [2024-09-29 21:43:20.294501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:01.554 [2024-09-29 21:43:20.294655] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:01.554 [2024-09-29 21:43:20.294673] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:01.554 [2024-09-29 21:43:20.294812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.554 pt4 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.554 "name": "raid_bdev1", 00:12:01.554 "uuid": "673f1bb0-c9b6-4ad7-be89-ceeb6835323d", 00:12:01.554 "strip_size_kb": 0, 00:12:01.554 "state": "online", 00:12:01.554 "raid_level": "raid1", 00:12:01.554 "superblock": true, 00:12:01.554 "num_base_bdevs": 4, 00:12:01.554 "num_base_bdevs_discovered": 3, 00:12:01.554 "num_base_bdevs_operational": 3, 00:12:01.554 
"base_bdevs_list": [ 00:12:01.554 { 00:12:01.554 "name": null, 00:12:01.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.554 "is_configured": false, 00:12:01.554 "data_offset": 2048, 00:12:01.554 "data_size": 63488 00:12:01.554 }, 00:12:01.554 { 00:12:01.554 "name": "pt2", 00:12:01.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:01.554 "is_configured": true, 00:12:01.554 "data_offset": 2048, 00:12:01.554 "data_size": 63488 00:12:01.554 }, 00:12:01.554 { 00:12:01.554 "name": "pt3", 00:12:01.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:01.554 "is_configured": true, 00:12:01.554 "data_offset": 2048, 00:12:01.554 "data_size": 63488 00:12:01.554 }, 00:12:01.554 { 00:12:01.554 "name": "pt4", 00:12:01.554 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:01.554 "is_configured": true, 00:12:01.554 "data_offset": 2048, 00:12:01.554 "data_size": 63488 00:12:01.554 } 00:12:01.554 ] 00:12:01.554 }' 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.554 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.814 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:01.814 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:01.814 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.814 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.814 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.814 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:01.814 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:01.814 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # 
jq -r '.[] | .uuid' 00:12:01.814 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.814 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.814 [2024-09-29 21:43:20.776900] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.075 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.075 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 673f1bb0-c9b6-4ad7-be89-ceeb6835323d '!=' 673f1bb0-c9b6-4ad7-be89-ceeb6835323d ']' 00:12:02.075 21:43:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74608 00:12:02.075 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74608 ']' 00:12:02.075 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74608 00:12:02.075 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:02.075 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:02.075 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74608 00:12:02.075 killing process with pid 74608 00:12:02.075 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:02.075 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:02.075 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74608' 00:12:02.075 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74608 00:12:02.075 21:43:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74608 00:12:02.075 [2024-09-29 21:43:20.849088] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:02.075 
[2024-09-29 21:43:20.849201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:02.075 [2024-09-29 21:43:20.849284] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:02.075 [2024-09-29 21:43:20.849298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:02.335 [2024-09-29 21:43:21.263304] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.717 ************************************ 00:12:03.717 END TEST raid_superblock_test 00:12:03.717 ************************************ 00:12:03.717 21:43:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:03.717 00:12:03.717 real 0m8.672s 00:12:03.717 user 0m13.320s 00:12:03.717 sys 0m1.689s 00:12:03.717 21:43:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.717 21:43:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.717 21:43:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:03.717 21:43:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:03.717 21:43:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.717 21:43:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:03.717 ************************************ 00:12:03.717 START TEST raid_read_error_test 00:12:03.717 ************************************ 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=read 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:03.717 
21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2E5MeK1f7u 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75095 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75095 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75095 ']' 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.717 21:43:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.977 [2024-09-29 21:43:22.778417] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:03.977 [2024-09-29 21:43:22.778537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75095 ] 00:12:03.977 [2024-09-29 21:43:22.946795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.238 [2024-09-29 21:43:23.194532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.498 [2024-09-29 21:43:23.424737] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.498 [2024-09-29 21:43:23.424777] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.758 BaseBdev1_malloc 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.758 true 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.758 [2024-09-29 21:43:23.656221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:04.758 [2024-09-29 21:43:23.656324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.758 [2024-09-29 21:43:23.656377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:04.758 [2024-09-29 21:43:23.656391] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.758 [2024-09-29 21:43:23.658802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.758 [2024-09-29 21:43:23.658842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:04.758 BaseBdev1 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.758 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.019 BaseBdev2_malloc 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.019 true 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.019 [2024-09-29 21:43:23.759018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:05.019 [2024-09-29 21:43:23.759084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.019 [2024-09-29 21:43:23.759100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:05.019 [2024-09-29 21:43:23.759111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.019 [2024-09-29 21:43:23.761504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.019 [2024-09-29 21:43:23.761599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:05.019 BaseBdev2 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.019 BaseBdev3_malloc 00:12:05.019 21:43:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.019 true 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.019 [2024-09-29 21:43:23.830918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:05.019 [2024-09-29 21:43:23.830969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.019 [2024-09-29 21:43:23.830984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:05.019 [2024-09-29 21:43:23.830995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.019 [2024-09-29 21:43:23.833366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.019 [2024-09-29 21:43:23.833441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:05.019 BaseBdev3 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.019 BaseBdev4_malloc 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.019 true 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.019 [2024-09-29 21:43:23.902554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:05.019 [2024-09-29 21:43:23.902603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.019 [2024-09-29 21:43:23.902620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:05.019 [2024-09-29 21:43:23.902633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.019 [2024-09-29 21:43:23.904976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.019 [2024-09-29 21:43:23.905017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:05.019 BaseBdev4 00:12:05.019 21:43:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.020 [2024-09-29 21:43:23.914614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.020 [2024-09-29 21:43:23.916704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:05.020 [2024-09-29 21:43:23.916785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.020 [2024-09-29 21:43:23.916845] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:05.020 [2024-09-29 21:43:23.917088] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:05.020 [2024-09-29 21:43:23.917105] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:05.020 [2024-09-29 21:43:23.917364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:05.020 [2024-09-29 21:43:23.917545] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:05.020 [2024-09-29 21:43:23.917555] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:05.020 [2024-09-29 21:43:23.917693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:05.020 21:43:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:05.020 "name": "raid_bdev1",
00:12:05.020 "uuid": "756c5df3-7b2c-4af3-ae17-f8573512d373",
00:12:05.020 "strip_size_kb": 0,
00:12:05.020 "state": "online",
00:12:05.020 "raid_level": "raid1",
00:12:05.020 "superblock": true,
00:12:05.020 "num_base_bdevs": 4,
00:12:05.020 "num_base_bdevs_discovered": 4,
00:12:05.020 "num_base_bdevs_operational": 4,
00:12:05.020 "base_bdevs_list": [
00:12:05.020 {
00:12:05.020 "name": "BaseBdev1",
00:12:05.020 "uuid": "44f3cc17-0bcd-540d-b697-a48510d74ee6",
00:12:05.020 "is_configured": true,
00:12:05.020 "data_offset": 2048,
00:12:05.020 "data_size": 63488
00:12:05.020 },
00:12:05.020 {
00:12:05.020 "name": "BaseBdev2",
00:12:05.020 "uuid": "487ec073-a397-5e87-911f-d69667882e42",
00:12:05.020 "is_configured": true,
00:12:05.020 "data_offset": 2048,
00:12:05.020 "data_size": 63488
00:12:05.020 },
00:12:05.020 {
00:12:05.020 "name": "BaseBdev3",
00:12:05.020 "uuid": "0e18ac21-9189-5fec-a1d1-fd15f624dfa2",
00:12:05.020 "is_configured": true,
00:12:05.020 "data_offset": 2048,
00:12:05.020 "data_size": 63488
00:12:05.020 },
00:12:05.020 {
00:12:05.020 "name": "BaseBdev4",
00:12:05.020 "uuid": "7421078f-e455-55f8-bf82-d65148081847",
00:12:05.020 "is_configured": true,
00:12:05.020 "data_offset": 2048,
00:12:05.020 "data_size": 63488
00:12:05.020 }
00:12:05.020 ]
00:12:05.020 }'
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:05.020 21:43:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:05.591 21:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:05.591 21:43:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:12:05.591 [2024-09-29 21:43:24.447163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:12:06.531 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:12:06.531 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.531 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.531 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.531 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:06.532 "name": "raid_bdev1",
00:12:06.532 "uuid": "756c5df3-7b2c-4af3-ae17-f8573512d373",
00:12:06.532 "strip_size_kb": 0,
00:12:06.532 "state": "online",
00:12:06.532 "raid_level": "raid1",
00:12:06.532 "superblock": true,
00:12:06.532 "num_base_bdevs": 4,
00:12:06.532 "num_base_bdevs_discovered": 4,
00:12:06.532 "num_base_bdevs_operational": 4,
00:12:06.532 "base_bdevs_list": [
00:12:06.532 {
00:12:06.532 "name": "BaseBdev1",
00:12:06.532 "uuid": "44f3cc17-0bcd-540d-b697-a48510d74ee6",
00:12:06.532 "is_configured": true,
00:12:06.532 "data_offset": 2048,
00:12:06.532 "data_size": 63488
00:12:06.532 },
00:12:06.532 {
00:12:06.532 "name": "BaseBdev2",
00:12:06.532 "uuid": "487ec073-a397-5e87-911f-d69667882e42",
00:12:06.532 "is_configured": true,
00:12:06.532 "data_offset": 2048,
00:12:06.532 "data_size": 63488
00:12:06.532 },
00:12:06.532 {
00:12:06.532 "name": "BaseBdev3",
00:12:06.532 "uuid": "0e18ac21-9189-5fec-a1d1-fd15f624dfa2",
00:12:06.532 "is_configured": true,
00:12:06.532 "data_offset": 2048,
00:12:06.532 "data_size": 63488
00:12:06.532 },
00:12:06.532 {
00:12:06.532 "name": "BaseBdev4",
00:12:06.532 "uuid": "7421078f-e455-55f8-bf82-d65148081847",
00:12:06.532 "is_configured": true,
00:12:06.532 "data_offset": 2048,
00:12:06.532 "data_size": 63488
00:12:06.532 }
00:12:06.532 ]
00:12:06.532 }'
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:06.532 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:07.101 [2024-09-29 21:43:25.810249] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:07.101 [2024-09-29 21:43:25.810362] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:07.101 [2024-09-29 21:43:25.813007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:07.101 [2024-09-29 21:43:25.813135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:07.101 [2024-09-29 21:43:25.813292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:07.101 [2024-09-29 21:43:25.813349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:07.101 {
00:12:07.101 "results": [
00:12:07.101 {
00:12:07.101 "job": "raid_bdev1",
00:12:07.101 "core_mask": "0x1",
00:12:07.101 "workload": "randrw",
00:12:07.101 "percentage": 50,
00:12:07.101 "status": "finished",
00:12:07.101 "queue_depth": 1,
00:12:07.101 "io_size": 131072,
00:12:07.101 "runtime": 1.363721,
00:12:07.101 "iops": 8180.55892664262,
00:12:07.101 "mibps": 1022.5698658303274,
00:12:07.101 "io_failed": 0,
00:12:07.101 "io_timeout": 0,
00:12:07.101 "avg_latency_us": 119.75823924619647,
00:12:07.101 "min_latency_us": 22.46986899563319,
00:12:07.101 "max_latency_us": 1330.7528384279476
00:12:07.101 }
00:12:07.101 ],
00:12:07.101 "core_count": 1
00:12:07.101 }
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75095
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75095 ']'
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75095
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75095
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75095'
killing process with pid 75095
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75095
00:12:07.101 [2024-09-29 21:43:25.860071] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:07.101 21:43:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75095
00:12:07.362 [2024-09-29 21:43:26.201898] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:08.747 21:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2E5MeK1f7u
00:12:08.747 21:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:12:08.747 21:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:12:08.747 21:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:12:08.747 21:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:12:08.747 ************************************
00:12:08.747 END TEST raid_read_error_test
00:12:08.747 ************************************
00:12:08.747 21:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:08.747 21:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:12:08.747 21:43:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:12:08.747
00:12:08.747 real 0m4.944s
00:12:08.747 user 0m5.632s
00:12:08.747 sys 0m0.720s
00:12:08.747 21:43:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:08.747 21:43:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:08.747 21:43:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write
00:12:08.747 21:43:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:12:08.747 21:43:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:08.747 21:43:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:12:08.747 ************************************
00:12:08.747 START TEST raid_write_error_test
00:12:08.747 ************************************
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.W7vecwgdz6
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75246
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75246
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75246 ']'
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:08.747 21:43:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.007 [2024-09-29 21:43:27.798893] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:12:09.007 [2024-09-29 21:43:27.799101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75246 ]
00:12:09.007 [2024-09-29 21:43:27.963610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:09.267 [2024-09-29 21:43:28.211001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:12:09.528 [2024-09-29 21:43:28.441036] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:09.528 [2024-09-29 21:43:28.441173] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.788 BaseBdev1_malloc
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.788 true
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.788 [2024-09-29 21:43:28.675718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:12:09.788 [2024-09-29 21:43:28.675812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:09.788 [2024-09-29 21:43:28.675864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:12:09.788 [2024-09-29 21:43:28.675894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:09.788 [2024-09-29 21:43:28.678331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:09.788 [2024-09-29 21:43:28.678422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:12:09.788 BaseBdev1
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.788 BaseBdev2_malloc
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.788 true
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:09.788 [2024-09-29 21:43:28.758099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:12:09.788 [2024-09-29 21:43:28.758154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:09.788 [2024-09-29 21:43:28.758171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:12:09.788 [2024-09-29 21:43:28.758182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:09.788 [2024-09-29 21:43:28.760546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:09.788 [2024-09-29 21:43:28.760587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:12:09.788 BaseBdev2
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:09.788 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:10.049 BaseBdev3_malloc
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:10.049 true
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:10.049 [2024-09-29 21:43:28.831594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:12:10.049 [2024-09-29 21:43:28.831648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:10.049 [2024-09-29 21:43:28.831666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:12:10.049 [2024-09-29 21:43:28.831676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:10.049 [2024-09-29 21:43:28.834109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:10.049 [2024-09-29 21:43:28.834144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:12:10.049 BaseBdev3
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:10.049 BaseBdev4_malloc
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:10.049 true
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:10.049 [2024-09-29 21:43:28.903727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:12:10.049 [2024-09-29 21:43:28.903780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:10.049 [2024-09-29 21:43:28.903797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:10.049 [2024-09-29 21:43:28.903808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:10.049 [2024-09-29 21:43:28.906207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:10.049 [2024-09-29 21:43:28.906245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:12:10.049 BaseBdev4
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:10.049 [2024-09-29 21:43:28.915791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:10.049 [2024-09-29 21:43:28.917845] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:10.049 [2024-09-29 21:43:28.917919] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:10.049 [2024-09-29 21:43:28.917973] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:10.049 [2024-09-29 21:43:28.918292] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:12:10.049 [2024-09-29 21:43:28.918341] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:10.049 [2024-09-29 21:43:28.918615] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:12:10.049 [2024-09-29 21:43:28.918835] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:12:10.049 [2024-09-29 21:43:28.918876] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:12:10.049 [2024-09-29 21:43:28.919082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:10.049 "name": "raid_bdev1",
00:12:10.049 "uuid": "9d3370e1-b178-4658-8b7a-705cc0e459a3",
00:12:10.049 "strip_size_kb": 0,
00:12:10.049 "state": "online",
00:12:10.049 "raid_level": "raid1",
00:12:10.049 "superblock": true,
00:12:10.049 "num_base_bdevs": 4,
00:12:10.049 "num_base_bdevs_discovered": 4,
00:12:10.049 "num_base_bdevs_operational": 4,
00:12:10.049 "base_bdevs_list": [
00:12:10.049 {
00:12:10.049 "name": "BaseBdev1",
00:12:10.049 "uuid": "ef0917c5-90ab-58ed-8798-e6976832f94e",
00:12:10.049 "is_configured": true,
00:12:10.049 "data_offset": 2048,
00:12:10.049 "data_size": 63488
00:12:10.049 },
00:12:10.049 {
00:12:10.049 "name": "BaseBdev2",
00:12:10.049 "uuid": "8b91538f-c32d-55af-896b-ed9c715e9d5c",
00:12:10.049 "is_configured": true,
00:12:10.049 "data_offset": 2048,
00:12:10.049 "data_size": 63488
00:12:10.049 },
00:12:10.049 {
00:12:10.049 "name": "BaseBdev3",
00:12:10.049 "uuid": "cfc5ac71-32eb-5117-a124-0a6125a5cbe9",
00:12:10.049 "is_configured": true,
00:12:10.049 "data_offset": 2048,
00:12:10.049 "data_size": 63488
00:12:10.049 },
00:12:10.049 {
00:12:10.049 "name": "BaseBdev4",
00:12:10.049 "uuid": "0643747e-dfa6-50d9-986d-ddaec4d2d77c",
00:12:10.049 "is_configured": true,
00:12:10.049 "data_offset": 2048,
00:12:10.049 "data_size": 63488
00:12:10.049 }
00:12:10.049 ]
00:12:10.049 }'
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:10.049 21:43:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:10.619 21:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:10.619 21:43:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:12:10.619 [2024-09-29 21:43:29.448222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:11.558 [2024-09-29 21:43:30.389781] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:12:11.558 [2024-09-29 21:43:30.389949] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:11.558 [2024-09-29 21:43:30.390222] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:11.558 "name": "raid_bdev1",
00:12:11.558 "uuid": "9d3370e1-b178-4658-8b7a-705cc0e459a3",
00:12:11.558 "strip_size_kb": 0,
00:12:11.558 "state": "online",
00:12:11.558 "raid_level": "raid1",
00:12:11.558 "superblock": true,
00:12:11.558 "num_base_bdevs": 4,
00:12:11.558 "num_base_bdevs_discovered": 3,
00:12:11.558 "num_base_bdevs_operational": 3,
00:12:11.558 "base_bdevs_list": [
00:12:11.558 {
00:12:11.558 "name": null,
00:12:11.558 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:11.558 "is_configured": false,
00:12:11.558 "data_offset": 0,
00:12:11.558 "data_size": 63488
00:12:11.558 },
00:12:11.558 {
00:12:11.558 "name": "BaseBdev2",
00:12:11.558 "uuid": "8b91538f-c32d-55af-896b-ed9c715e9d5c",
00:12:11.558 "is_configured": true,
00:12:11.558 "data_offset": 2048,
00:12:11.558 "data_size": 63488
00:12:11.558 },
00:12:11.558 {
00:12:11.558 "name": "BaseBdev3",
00:12:11.558 "uuid": "cfc5ac71-32eb-5117-a124-0a6125a5cbe9",
00:12:11.558 "is_configured": true,
00:12:11.558 "data_offset": 2048,
00:12:11.558 "data_size": 63488
00:12:11.558 },
00:12:11.558 {
00:12:11.558 "name": "BaseBdev4",
00:12:11.558 "uuid": "0643747e-dfa6-50d9-986d-ddaec4d2d77c",
00:12:11.558 "is_configured": true,
00:12:11.558 "data_offset": 2048,
00:12:11.558 "data_size": 63488
00:12:11.558 }
00:12:11.558 ]
00:12:11.558 }' 00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.558 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.128 [2024-09-29 21:43:30.888223] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:12.128 [2024-09-29 21:43:30.888259] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.128 [2024-09-29 21:43:30.890869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:12.128 [2024-09-29 21:43:30.890922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.128 [2024-09-29 21:43:30.891035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:12.128 [2024-09-29 21:43:30.891060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:12.128 { 00:12:12.128 "results": [ 00:12:12.128 { 00:12:12.128 "job": "raid_bdev1", 00:12:12.128 "core_mask": "0x1", 00:12:12.128 "workload": "randrw", 00:12:12.128 "percentage": 50, 00:12:12.128 "status": "finished", 00:12:12.128 "queue_depth": 1, 00:12:12.128 "io_size": 131072, 00:12:12.128 "runtime": 1.440634, 00:12:12.128 "iops": 8971.74438476393, 00:12:12.128 "mibps": 1121.4680480954912, 00:12:12.128 "io_failed": 0, 00:12:12.128 "io_timeout": 0, 00:12:12.128 "avg_latency_us": 108.99088912351237, 00:12:12.128 "min_latency_us": 22.46986899563319, 00:12:12.128 "max_latency_us": 1345.0620087336245 00:12:12.128 } 00:12:12.128 ], 00:12:12.128 "core_count": 1 
00:12:12.128 } 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75246 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75246 ']' 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75246 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75246 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:12.128 killing process with pid 75246 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75246' 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75246 00:12:12.128 [2024-09-29 21:43:30.924800] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:12.128 21:43:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75246 00:12:12.388 [2024-09-29 21:43:31.266309] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.768 21:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.W7vecwgdz6 00:12:13.768 21:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:13.768 21:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:13.768 21:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:13.768 21:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:13.768 21:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:13.768 21:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:13.768 ************************************ 00:12:13.768 END TEST raid_write_error_test 00:12:13.768 ************************************ 00:12:13.768 21:43:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:13.768 00:12:13.768 real 0m4.974s 00:12:13.768 user 0m5.652s 00:12:13.768 sys 0m0.748s 00:12:13.768 21:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:13.768 21:43:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.768 21:43:32 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:13.768 21:43:32 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:13.768 21:43:32 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:13.768 21:43:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:13.768 21:43:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:13.768 21:43:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.768 ************************************ 00:12:13.768 START TEST raid_rebuild_test 00:12:13.768 ************************************ 00:12:13.768 21:43:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:12:13.768 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:13.768 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:13.768 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:13.768 
21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:13.768 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:13.768 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:13.768 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:13.768 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:13.768 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:13.768 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:13.768 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:13.768 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:13.768 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75390 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75390 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75390 ']' 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:14.029 21:43:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.029 [2024-09-29 21:43:32.841050] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:14.029 [2024-09-29 21:43:32.841250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:14.029 Zero copy mechanism will not be used. 
00:12:14.029 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75390 ] 00:12:14.029 [2024-09-29 21:43:33.005780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.288 [2024-09-29 21:43:33.254856] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.548 [2024-09-29 21:43:33.485460] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.548 [2024-09-29 21:43:33.485608] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.808 BaseBdev1_malloc 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.808 [2024-09-29 21:43:33.717216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:14.808 [2024-09-29 21:43:33.717342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.808 [2024-09-29 
21:43:33.717385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:14.808 [2024-09-29 21:43:33.717422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.808 [2024-09-29 21:43:33.719842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.808 [2024-09-29 21:43:33.719926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:14.808 BaseBdev1 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.808 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.067 BaseBdev2_malloc 00:12:15.067 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.067 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:15.067 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.067 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.067 [2024-09-29 21:43:33.805445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:15.067 [2024-09-29 21:43:33.805559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.067 [2024-09-29 21:43:33.805596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:15.067 [2024-09-29 21:43:33.805631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:12:15.067 [2024-09-29 21:43:33.808002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.067 [2024-09-29 21:43:33.808100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:15.067 BaseBdev2 00:12:15.067 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.067 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:15.067 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.068 spare_malloc 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.068 spare_delay 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.068 [2024-09-29 21:43:33.876751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:15.068 [2024-09-29 21:43:33.876808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.068 [2024-09-29 21:43:33.876827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:12:15.068 [2024-09-29 21:43:33.876837] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.068 [2024-09-29 21:43:33.879232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.068 [2024-09-29 21:43:33.879322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:15.068 spare 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.068 [2024-09-29 21:43:33.888782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.068 [2024-09-29 21:43:33.890894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.068 [2024-09-29 21:43:33.891043] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:15.068 [2024-09-29 21:43:33.891059] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:15.068 [2024-09-29 21:43:33.891354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:15.068 [2024-09-29 21:43:33.891535] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:15.068 [2024-09-29 21:43:33.891544] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:15.068 [2024-09-29 21:43:33.891694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.068 
21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.068 "name": "raid_bdev1", 00:12:15.068 "uuid": "bf454af7-52d8-48b0-a6d3-a61ed41544d2", 00:12:15.068 "strip_size_kb": 0, 00:12:15.068 "state": "online", 00:12:15.068 "raid_level": "raid1", 00:12:15.068 "superblock": false, 00:12:15.068 "num_base_bdevs": 2, 00:12:15.068 "num_base_bdevs_discovered": 
2, 00:12:15.068 "num_base_bdevs_operational": 2, 00:12:15.068 "base_bdevs_list": [ 00:12:15.068 { 00:12:15.068 "name": "BaseBdev1", 00:12:15.068 "uuid": "1ec6727d-b8f3-569c-b88f-539d52f38555", 00:12:15.068 "is_configured": true, 00:12:15.068 "data_offset": 0, 00:12:15.068 "data_size": 65536 00:12:15.068 }, 00:12:15.068 { 00:12:15.068 "name": "BaseBdev2", 00:12:15.068 "uuid": "a31ab410-9141-5ba3-8f83-24b07a78a3e6", 00:12:15.068 "is_configured": true, 00:12:15.068 "data_offset": 0, 00:12:15.068 "data_size": 65536 00:12:15.068 } 00:12:15.068 ] 00:12:15.068 }' 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.068 21:43:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.327 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:15.327 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.327 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.327 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:15.327 [2024-09-29 21:43:34.280461] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.327 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:15.586 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:15.586 [2024-09-29 21:43:34.555665] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:15.846 /dev/nbd0 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:15.846 1+0 records in 00:12:15.846 1+0 records out 00:12:15.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280942 s, 14.6 MB/s 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:12:15.846 21:43:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:20.042 65536+0 records in 00:12:20.042 65536+0 records out 00:12:20.042 33554432 bytes (34 MB, 32 MiB) copied, 3.97467 s, 8.4 MB/s 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:20.042 [2024-09-29 21:43:38.815772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:20.042 
21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.042 [2024-09-29 21:43:38.823848] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.042 "name": "raid_bdev1", 00:12:20.042 "uuid": "bf454af7-52d8-48b0-a6d3-a61ed41544d2", 00:12:20.042 "strip_size_kb": 0, 00:12:20.042 "state": "online", 00:12:20.042 "raid_level": "raid1", 00:12:20.042 "superblock": false, 00:12:20.042 "num_base_bdevs": 2, 00:12:20.042 "num_base_bdevs_discovered": 1, 00:12:20.042 "num_base_bdevs_operational": 1, 00:12:20.042 "base_bdevs_list": [ 00:12:20.042 { 00:12:20.042 "name": null, 00:12:20.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.042 "is_configured": false, 00:12:20.042 "data_offset": 0, 00:12:20.042 "data_size": 65536 00:12:20.042 }, 00:12:20.042 { 00:12:20.042 "name": "BaseBdev2", 00:12:20.042 "uuid": "a31ab410-9141-5ba3-8f83-24b07a78a3e6", 00:12:20.042 "is_configured": true, 00:12:20.042 "data_offset": 0, 00:12:20.042 "data_size": 65536 00:12:20.042 } 00:12:20.042 ] 00:12:20.042 }' 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.042 21:43:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.301 21:43:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:20.301 21:43:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.301 21:43:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.301 [2024-09-29 21:43:39.263112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:20.301 [2024-09-29 21:43:39.278946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:12:20.301 21:43:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.301 21:43:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:20.301 [2024-09-29 21:43:39.281142] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.683 "name": "raid_bdev1", 00:12:21.683 "uuid": "bf454af7-52d8-48b0-a6d3-a61ed41544d2", 00:12:21.683 "strip_size_kb": 0, 00:12:21.683 "state": "online", 00:12:21.683 "raid_level": "raid1", 00:12:21.683 "superblock": false, 00:12:21.683 "num_base_bdevs": 2, 00:12:21.683 "num_base_bdevs_discovered": 2, 00:12:21.683 "num_base_bdevs_operational": 2, 00:12:21.683 "process": { 00:12:21.683 "type": "rebuild", 00:12:21.683 "target": "spare", 00:12:21.683 "progress": { 00:12:21.683 "blocks": 20480, 00:12:21.683 "percent": 31 00:12:21.683 } 00:12:21.683 }, 00:12:21.683 "base_bdevs_list": [ 00:12:21.683 { 
00:12:21.683 "name": "spare", 00:12:21.683 "uuid": "384a1ba9-105f-5a9f-bf69-dc288a84b73b", 00:12:21.683 "is_configured": true, 00:12:21.683 "data_offset": 0, 00:12:21.683 "data_size": 65536 00:12:21.683 }, 00:12:21.683 { 00:12:21.683 "name": "BaseBdev2", 00:12:21.683 "uuid": "a31ab410-9141-5ba3-8f83-24b07a78a3e6", 00:12:21.683 "is_configured": true, 00:12:21.683 "data_offset": 0, 00:12:21.683 "data_size": 65536 00:12:21.683 } 00:12:21.683 ] 00:12:21.683 }' 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.683 [2024-09-29 21:43:40.420289] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:21.683 [2024-09-29 21:43:40.489786] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:21.683 [2024-09-29 21:43:40.489849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.683 [2024-09-29 21:43:40.489865] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:21.683 [2024-09-29 21:43:40.489875] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.683 21:43:40 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.683 "name": "raid_bdev1", 00:12:21.683 "uuid": "bf454af7-52d8-48b0-a6d3-a61ed41544d2", 00:12:21.683 "strip_size_kb": 0, 00:12:21.683 "state": "online", 00:12:21.683 "raid_level": "raid1", 00:12:21.683 "superblock": false, 00:12:21.683 "num_base_bdevs": 2, 00:12:21.683 "num_base_bdevs_discovered": 1, 
00:12:21.683 "num_base_bdevs_operational": 1, 00:12:21.683 "base_bdevs_list": [ 00:12:21.683 { 00:12:21.683 "name": null, 00:12:21.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.683 "is_configured": false, 00:12:21.683 "data_offset": 0, 00:12:21.683 "data_size": 65536 00:12:21.683 }, 00:12:21.683 { 00:12:21.683 "name": "BaseBdev2", 00:12:21.683 "uuid": "a31ab410-9141-5ba3-8f83-24b07a78a3e6", 00:12:21.683 "is_configured": true, 00:12:21.683 "data_offset": 0, 00:12:21.683 "data_size": 65536 00:12:21.683 } 00:12:21.683 ] 00:12:21.683 }' 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.683 21:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.254 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:22.255 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.255 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:22.255 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:22.255 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.255 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.255 21:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.255 21:43:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.255 21:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.255 21:43:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.255 21:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.255 "name": "raid_bdev1", 00:12:22.255 "uuid": 
"bf454af7-52d8-48b0-a6d3-a61ed41544d2", 00:12:22.255 "strip_size_kb": 0, 00:12:22.255 "state": "online", 00:12:22.255 "raid_level": "raid1", 00:12:22.255 "superblock": false, 00:12:22.255 "num_base_bdevs": 2, 00:12:22.255 "num_base_bdevs_discovered": 1, 00:12:22.255 "num_base_bdevs_operational": 1, 00:12:22.255 "base_bdevs_list": [ 00:12:22.255 { 00:12:22.255 "name": null, 00:12:22.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.255 "is_configured": false, 00:12:22.255 "data_offset": 0, 00:12:22.255 "data_size": 65536 00:12:22.255 }, 00:12:22.255 { 00:12:22.255 "name": "BaseBdev2", 00:12:22.255 "uuid": "a31ab410-9141-5ba3-8f83-24b07a78a3e6", 00:12:22.255 "is_configured": true, 00:12:22.255 "data_offset": 0, 00:12:22.255 "data_size": 65536 00:12:22.255 } 00:12:22.255 ] 00:12:22.255 }' 00:12:22.255 21:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.255 21:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:22.255 21:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.255 21:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:22.255 21:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:22.255 21:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.255 21:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.255 [2024-09-29 21:43:41.122692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:22.255 [2024-09-29 21:43:41.138271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:22.255 21:43:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.255 21:43:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:12:22.255 [2024-09-29 21:43:41.140358] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:23.193 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.193 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.193 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.193 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.193 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.193 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.193 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.193 21:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.193 21:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.193 21:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.453 "name": "raid_bdev1", 00:12:23.453 "uuid": "bf454af7-52d8-48b0-a6d3-a61ed41544d2", 00:12:23.453 "strip_size_kb": 0, 00:12:23.453 "state": "online", 00:12:23.453 "raid_level": "raid1", 00:12:23.453 "superblock": false, 00:12:23.453 "num_base_bdevs": 2, 00:12:23.453 "num_base_bdevs_discovered": 2, 00:12:23.453 "num_base_bdevs_operational": 2, 00:12:23.453 "process": { 00:12:23.453 "type": "rebuild", 00:12:23.453 "target": "spare", 00:12:23.453 "progress": { 00:12:23.453 "blocks": 20480, 00:12:23.453 "percent": 31 00:12:23.453 } 00:12:23.453 }, 00:12:23.453 "base_bdevs_list": [ 00:12:23.453 { 00:12:23.453 "name": "spare", 00:12:23.453 "uuid": 
"384a1ba9-105f-5a9f-bf69-dc288a84b73b", 00:12:23.453 "is_configured": true, 00:12:23.453 "data_offset": 0, 00:12:23.453 "data_size": 65536 00:12:23.453 }, 00:12:23.453 { 00:12:23.453 "name": "BaseBdev2", 00:12:23.453 "uuid": "a31ab410-9141-5ba3-8f83-24b07a78a3e6", 00:12:23.453 "is_configured": true, 00:12:23.453 "data_offset": 0, 00:12:23.453 "data_size": 65536 00:12:23.453 } 00:12:23.453 ] 00:12:23.453 }' 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=378 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.453 "name": "raid_bdev1", 00:12:23.453 "uuid": "bf454af7-52d8-48b0-a6d3-a61ed41544d2", 00:12:23.453 "strip_size_kb": 0, 00:12:23.453 "state": "online", 00:12:23.453 "raid_level": "raid1", 00:12:23.453 "superblock": false, 00:12:23.453 "num_base_bdevs": 2, 00:12:23.453 "num_base_bdevs_discovered": 2, 00:12:23.453 "num_base_bdevs_operational": 2, 00:12:23.453 "process": { 00:12:23.453 "type": "rebuild", 00:12:23.453 "target": "spare", 00:12:23.453 "progress": { 00:12:23.453 "blocks": 22528, 00:12:23.453 "percent": 34 00:12:23.453 } 00:12:23.453 }, 00:12:23.453 "base_bdevs_list": [ 00:12:23.453 { 00:12:23.453 "name": "spare", 00:12:23.453 "uuid": "384a1ba9-105f-5a9f-bf69-dc288a84b73b", 00:12:23.453 "is_configured": true, 00:12:23.453 "data_offset": 0, 00:12:23.453 "data_size": 65536 00:12:23.453 }, 00:12:23.453 { 00:12:23.453 "name": "BaseBdev2", 00:12:23.453 "uuid": "a31ab410-9141-5ba3-8f83-24b07a78a3e6", 00:12:23.453 "is_configured": true, 00:12:23.453 "data_offset": 0, 00:12:23.453 "data_size": 65536 00:12:23.453 } 00:12:23.453 ] 00:12:23.453 }' 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.453 21:43:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:24.832 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.833 "name": "raid_bdev1", 00:12:24.833 "uuid": "bf454af7-52d8-48b0-a6d3-a61ed41544d2", 00:12:24.833 "strip_size_kb": 0, 00:12:24.833 "state": "online", 00:12:24.833 "raid_level": "raid1", 00:12:24.833 "superblock": false, 00:12:24.833 "num_base_bdevs": 2, 00:12:24.833 "num_base_bdevs_discovered": 2, 00:12:24.833 "num_base_bdevs_operational": 2, 00:12:24.833 "process": { 00:12:24.833 "type": "rebuild", 00:12:24.833 "target": "spare", 
00:12:24.833 "progress": { 00:12:24.833 "blocks": 45056, 00:12:24.833 "percent": 68 00:12:24.833 } 00:12:24.833 }, 00:12:24.833 "base_bdevs_list": [ 00:12:24.833 { 00:12:24.833 "name": "spare", 00:12:24.833 "uuid": "384a1ba9-105f-5a9f-bf69-dc288a84b73b", 00:12:24.833 "is_configured": true, 00:12:24.833 "data_offset": 0, 00:12:24.833 "data_size": 65536 00:12:24.833 }, 00:12:24.833 { 00:12:24.833 "name": "BaseBdev2", 00:12:24.833 "uuid": "a31ab410-9141-5ba3-8f83-24b07a78a3e6", 00:12:24.833 "is_configured": true, 00:12:24.833 "data_offset": 0, 00:12:24.833 "data_size": 65536 00:12:24.833 } 00:12:24.833 ] 00:12:24.833 }' 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.833 21:43:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:25.402 [2024-09-29 21:43:44.363114] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:25.402 [2024-09-29 21:43:44.363206] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:25.402 [2024-09-29 21:43:44.363254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.661 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:25.661 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.661 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.661 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:12:25.661 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.661 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.661 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.661 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.661 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.661 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.661 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.661 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.661 "name": "raid_bdev1", 00:12:25.661 "uuid": "bf454af7-52d8-48b0-a6d3-a61ed41544d2", 00:12:25.661 "strip_size_kb": 0, 00:12:25.661 "state": "online", 00:12:25.661 "raid_level": "raid1", 00:12:25.661 "superblock": false, 00:12:25.661 "num_base_bdevs": 2, 00:12:25.661 "num_base_bdevs_discovered": 2, 00:12:25.661 "num_base_bdevs_operational": 2, 00:12:25.661 "base_bdevs_list": [ 00:12:25.661 { 00:12:25.661 "name": "spare", 00:12:25.661 "uuid": "384a1ba9-105f-5a9f-bf69-dc288a84b73b", 00:12:25.661 "is_configured": true, 00:12:25.661 "data_offset": 0, 00:12:25.661 "data_size": 65536 00:12:25.661 }, 00:12:25.661 { 00:12:25.661 "name": "BaseBdev2", 00:12:25.661 "uuid": "a31ab410-9141-5ba3-8f83-24b07a78a3e6", 00:12:25.661 "is_configured": true, 00:12:25.661 "data_offset": 0, 00:12:25.661 "data_size": 65536 00:12:25.661 } 00:12:25.661 ] 00:12:25.661 }' 00:12:25.661 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.921 "name": "raid_bdev1", 00:12:25.921 "uuid": "bf454af7-52d8-48b0-a6d3-a61ed41544d2", 00:12:25.921 "strip_size_kb": 0, 00:12:25.921 "state": "online", 00:12:25.921 "raid_level": "raid1", 00:12:25.921 "superblock": false, 00:12:25.921 "num_base_bdevs": 2, 00:12:25.921 "num_base_bdevs_discovered": 2, 00:12:25.921 "num_base_bdevs_operational": 2, 00:12:25.921 "base_bdevs_list": [ 00:12:25.921 { 00:12:25.921 "name": "spare", 00:12:25.921 "uuid": "384a1ba9-105f-5a9f-bf69-dc288a84b73b", 00:12:25.921 "is_configured": true, 00:12:25.921 "data_offset": 0, 00:12:25.921 "data_size": 65536 
00:12:25.921 }, 00:12:25.921 { 00:12:25.921 "name": "BaseBdev2", 00:12:25.921 "uuid": "a31ab410-9141-5ba3-8f83-24b07a78a3e6", 00:12:25.921 "is_configured": true, 00:12:25.921 "data_offset": 0, 00:12:25.921 "data_size": 65536 00:12:25.921 } 00:12:25.921 ] 00:12:25.921 }' 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.921 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.181 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.181 "name": "raid_bdev1", 00:12:26.181 "uuid": "bf454af7-52d8-48b0-a6d3-a61ed41544d2", 00:12:26.181 "strip_size_kb": 0, 00:12:26.181 "state": "online", 00:12:26.181 "raid_level": "raid1", 00:12:26.181 "superblock": false, 00:12:26.181 "num_base_bdevs": 2, 00:12:26.181 "num_base_bdevs_discovered": 2, 00:12:26.181 "num_base_bdevs_operational": 2, 00:12:26.181 "base_bdevs_list": [ 00:12:26.181 { 00:12:26.181 "name": "spare", 00:12:26.181 "uuid": "384a1ba9-105f-5a9f-bf69-dc288a84b73b", 00:12:26.181 "is_configured": true, 00:12:26.181 "data_offset": 0, 00:12:26.181 "data_size": 65536 00:12:26.181 }, 00:12:26.181 { 00:12:26.181 "name": "BaseBdev2", 00:12:26.181 "uuid": "a31ab410-9141-5ba3-8f83-24b07a78a3e6", 00:12:26.181 "is_configured": true, 00:12:26.181 "data_offset": 0, 00:12:26.181 "data_size": 65536 00:12:26.181 } 00:12:26.181 ] 00:12:26.181 }' 00:12:26.181 21:43:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.181 21:43:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.441 [2024-09-29 21:43:45.295950] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:26.441 [2024-09-29 21:43:45.296040] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:12:26.441 [2024-09-29 21:43:45.296183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.441 [2024-09-29 21:43:45.296283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.441 [2024-09-29 21:43:45.296329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:26.441 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:26.442 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:26.442 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:26.442 21:43:45 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:26.442 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:26.442 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:26.442 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:26.442 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:26.701 /dev/nbd0 00:12:26.701 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:26.701 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:26.701 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:26.701 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:26.701 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:26.701 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:26.701 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:26.701 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:26.701 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:26.701 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:26.702 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.702 1+0 records in 00:12:26.702 1+0 records out 00:12:26.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319339 s, 12.8 MB/s 00:12:26.702 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.702 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:26.702 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.702 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:26.702 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:26.702 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.702 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:26.702 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:26.961 /dev/nbd1 00:12:26.961 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:26.961 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:26.961 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:26.961 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:26.962 1+0 records in 00:12:26.962 1+0 records out 00:12:26.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486726 s, 8.4 MB/s 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:26.962 21:43:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:27.222 21:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:27.222 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:27.222 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:27.222 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:27.222 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:27.222 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.222 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75390 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@950 -- # '[' -z 75390 ']' 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75390 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:12:27.482 21:43:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:27.483 21:43:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75390 00:12:27.742 21:43:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:27.742 21:43:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:27.742 killing process with pid 75390 00:12:27.742 21:43:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75390' 00:12:27.742 21:43:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75390 00:12:27.742 Received shutdown signal, test time was about 60.000000 seconds 00:12:27.742 00:12:27.742 Latency(us) 00:12:27.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.742 =================================================================================================================== 00:12:27.742 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:27.742 [2024-09-29 21:43:46.469568] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.742 21:43:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75390 00:12:28.002 [2024-09-29 21:43:46.783276] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:29.384 00:12:29.384 real 0m15.360s 00:12:29.384 user 0m17.010s 00:12:29.384 sys 0m3.143s 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.384 21:43:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.384 ************************************ 00:12:29.384 END TEST raid_rebuild_test 00:12:29.384 ************************************ 00:12:29.384 21:43:48 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:29.384 21:43:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:29.384 21:43:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.384 21:43:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.384 ************************************ 00:12:29.384 START TEST raid_rebuild_test_sb 00:12:29.384 ************************************ 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev2 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75808 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75808 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75808 ']' 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.384 21:43:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.384 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:29.384 Zero copy mechanism will not be used. 00:12:29.384 [2024-09-29 21:43:48.285307] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:29.385 [2024-09-29 21:43:48.285442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75808 ] 00:12:29.644 [2024-09-29 21:43:48.450412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.904 [2024-09-29 21:43:48.707425] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.164 [2024-09-29 21:43:48.941039] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.165 [2024-09-29 21:43:48.941085] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.165 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.165 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:30.165 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:30.165 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- 
# rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:30.165 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.165 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.425 BaseBdev1_malloc 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.425 [2024-09-29 21:43:49.186531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:30.425 [2024-09-29 21:43:49.186600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.425 [2024-09-29 21:43:49.186645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:30.425 [2024-09-29 21:43:49.186661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.425 [2024-09-29 21:43:49.189068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.425 [2024-09-29 21:43:49.189106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:30.425 BaseBdev1 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.425 21:43:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.425 BaseBdev2_malloc 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.425 [2024-09-29 21:43:49.261665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:30.425 [2024-09-29 21:43:49.261730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.425 [2024-09-29 21:43:49.261752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:30.425 [2024-09-29 21:43:49.261764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.425 [2024-09-29 21:43:49.264211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.425 [2024-09-29 21:43:49.264248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:30.425 BaseBdev2 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.425 spare_malloc 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create 
-b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.425 spare_delay 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.425 [2024-09-29 21:43:49.334772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:30.425 [2024-09-29 21:43:49.334833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.425 [2024-09-29 21:43:49.334869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:30.425 [2024-09-29 21:43:49.334880] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.425 [2024-09-29 21:43:49.337261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.425 [2024-09-29 21:43:49.337302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:30.425 spare 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.425 [2024-09-29 21:43:49.346810] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.425 [2024-09-29 21:43:49.348853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.425 [2024-09-29 21:43:49.349044] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:30.425 [2024-09-29 21:43:49.349068] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:30.425 [2024-09-29 21:43:49.349326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:30.425 [2024-09-29 21:43:49.349504] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:30.425 [2024-09-29 21:43:49.349518] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:30.425 [2024-09-29 21:43:49.349669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.425 
21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.425 "name": "raid_bdev1", 00:12:30.425 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:30.425 "strip_size_kb": 0, 00:12:30.425 "state": "online", 00:12:30.425 "raid_level": "raid1", 00:12:30.425 "superblock": true, 00:12:30.425 "num_base_bdevs": 2, 00:12:30.425 "num_base_bdevs_discovered": 2, 00:12:30.425 "num_base_bdevs_operational": 2, 00:12:30.425 "base_bdevs_list": [ 00:12:30.425 { 00:12:30.425 "name": "BaseBdev1", 00:12:30.425 "uuid": "548b398b-816c-5ddb-852a-402d21b7e6c7", 00:12:30.425 "is_configured": true, 00:12:30.425 "data_offset": 2048, 00:12:30.425 "data_size": 63488 00:12:30.425 }, 00:12:30.425 { 00:12:30.425 "name": "BaseBdev2", 00:12:30.425 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:30.425 "is_configured": true, 00:12:30.425 "data_offset": 2048, 00:12:30.425 "data_size": 63488 00:12:30.425 } 00:12:30.425 ] 00:12:30.425 }' 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.425 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.994 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.994 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.994 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.994 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:30.994 [2024-09-29 21:43:49.810353] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 
00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.995 21:43:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:31.254 [2024-09-29 21:43:50.101581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:31.254 /dev/nbd0 00:12:31.254 21:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:31.254 21:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:31.254 21:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:31.254 21:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:31.254 21:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:31.254 21:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:31.254 21:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:31.254 21:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:31.255 21:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:31.255 21:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:31.255 21:43:50 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.255 1+0 records in 00:12:31.255 1+0 records out 00:12:31.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367837 s, 11.1 MB/s 00:12:31.255 21:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.255 21:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:31.255 21:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.255 21:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:31.255 21:43:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:31.255 21:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.255 21:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:31.255 21:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:31.255 21:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:31.255 21:43:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:35.451 63488+0 records in 00:12:35.451 63488+0 records out 00:12:35.451 32505856 bytes (33 MB, 31 MiB) copied, 3.99959 s, 8.1 MB/s 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:35.451 [2024-09-29 21:43:54.388128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.451 [2024-09-29 21:43:54.404215] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.451 21:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.710 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.710 "name": "raid_bdev1", 00:12:35.710 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:35.710 "strip_size_kb": 0, 00:12:35.710 "state": "online", 00:12:35.710 "raid_level": "raid1", 00:12:35.710 "superblock": true, 00:12:35.710 "num_base_bdevs": 2, 00:12:35.710 "num_base_bdevs_discovered": 1, 00:12:35.710 "num_base_bdevs_operational": 1, 00:12:35.710 
"base_bdevs_list": [ 00:12:35.710 { 00:12:35.710 "name": null, 00:12:35.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.710 "is_configured": false, 00:12:35.710 "data_offset": 0, 00:12:35.710 "data_size": 63488 00:12:35.710 }, 00:12:35.710 { 00:12:35.710 "name": "BaseBdev2", 00:12:35.710 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:35.710 "is_configured": true, 00:12:35.710 "data_offset": 2048, 00:12:35.710 "data_size": 63488 00:12:35.710 } 00:12:35.710 ] 00:12:35.710 }' 00:12:35.710 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.710 21:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.985 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:35.985 21:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.985 21:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.985 [2024-09-29 21:43:54.859648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.985 [2024-09-29 21:43:54.879499] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:35.985 21:43:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.985 21:43:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:35.985 [2024-09-29 21:43:54.881717] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:36.938 21:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.938 21:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.938 21:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.938 21:43:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.938 21:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.938 21:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.938 21:43:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.938 21:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.938 21:43:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.938 21:43:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.197 21:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.197 "name": "raid_bdev1", 00:12:37.197 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:37.197 "strip_size_kb": 0, 00:12:37.197 "state": "online", 00:12:37.197 "raid_level": "raid1", 00:12:37.197 "superblock": true, 00:12:37.197 "num_base_bdevs": 2, 00:12:37.197 "num_base_bdevs_discovered": 2, 00:12:37.197 "num_base_bdevs_operational": 2, 00:12:37.197 "process": { 00:12:37.197 "type": "rebuild", 00:12:37.197 "target": "spare", 00:12:37.197 "progress": { 00:12:37.197 "blocks": 20480, 00:12:37.197 "percent": 32 00:12:37.197 } 00:12:37.197 }, 00:12:37.197 "base_bdevs_list": [ 00:12:37.197 { 00:12:37.197 "name": "spare", 00:12:37.197 "uuid": "baac6b56-3635-56a9-be7c-ee2751ec0e7b", 00:12:37.197 "is_configured": true, 00:12:37.197 "data_offset": 2048, 00:12:37.197 "data_size": 63488 00:12:37.197 }, 00:12:37.197 { 00:12:37.197 "name": "BaseBdev2", 00:12:37.197 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:37.197 "is_configured": true, 00:12:37.198 "data_offset": 2048, 00:12:37.198 "data_size": 63488 00:12:37.198 } 00:12:37.198 ] 00:12:37.198 }' 00:12:37.198 21:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:12:37.198 21:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.198 21:43:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.198 [2024-09-29 21:43:56.048380] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.198 [2024-09-29 21:43:56.087279] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:37.198 [2024-09-29 21:43:56.087350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.198 [2024-09-29 21:43:56.087368] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.198 [2024-09-29 21:43:56.087380] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.198 "name": "raid_bdev1", 00:12:37.198 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:37.198 "strip_size_kb": 0, 00:12:37.198 "state": "online", 00:12:37.198 "raid_level": "raid1", 00:12:37.198 "superblock": true, 00:12:37.198 "num_base_bdevs": 2, 00:12:37.198 "num_base_bdevs_discovered": 1, 00:12:37.198 "num_base_bdevs_operational": 1, 00:12:37.198 "base_bdevs_list": [ 00:12:37.198 { 00:12:37.198 "name": null, 00:12:37.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.198 "is_configured": false, 00:12:37.198 "data_offset": 0, 00:12:37.198 "data_size": 63488 00:12:37.198 }, 00:12:37.198 { 00:12:37.198 "name": "BaseBdev2", 00:12:37.198 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:37.198 "is_configured": true, 00:12:37.198 "data_offset": 2048, 00:12:37.198 "data_size": 
63488 00:12:37.198 } 00:12:37.198 ] 00:12:37.198 }' 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.198 21:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.766 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.766 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.766 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:37.766 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.767 "name": "raid_bdev1", 00:12:37.767 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:37.767 "strip_size_kb": 0, 00:12:37.767 "state": "online", 00:12:37.767 "raid_level": "raid1", 00:12:37.767 "superblock": true, 00:12:37.767 "num_base_bdevs": 2, 00:12:37.767 "num_base_bdevs_discovered": 1, 00:12:37.767 "num_base_bdevs_operational": 1, 00:12:37.767 "base_bdevs_list": [ 00:12:37.767 { 00:12:37.767 "name": null, 00:12:37.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.767 "is_configured": false, 00:12:37.767 
"data_offset": 0, 00:12:37.767 "data_size": 63488 00:12:37.767 }, 00:12:37.767 { 00:12:37.767 "name": "BaseBdev2", 00:12:37.767 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:37.767 "is_configured": true, 00:12:37.767 "data_offset": 2048, 00:12:37.767 "data_size": 63488 00:12:37.767 } 00:12:37.767 ] 00:12:37.767 }' 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.767 [2024-09-29 21:43:56.729734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.767 [2024-09-29 21:43:56.747895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.767 21:43:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:37.767 [2024-09-29 21:43:56.750073] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:39.143 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.143 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.143 21:43:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.143 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.143 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.143 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.143 21:43:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.143 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.143 21:43:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.143 21:43:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.143 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.143 "name": "raid_bdev1", 00:12:39.144 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:39.144 "strip_size_kb": 0, 00:12:39.144 "state": "online", 00:12:39.144 "raid_level": "raid1", 00:12:39.144 "superblock": true, 00:12:39.144 "num_base_bdevs": 2, 00:12:39.144 "num_base_bdevs_discovered": 2, 00:12:39.144 "num_base_bdevs_operational": 2, 00:12:39.144 "process": { 00:12:39.144 "type": "rebuild", 00:12:39.144 "target": "spare", 00:12:39.144 "progress": { 00:12:39.144 "blocks": 20480, 00:12:39.144 "percent": 32 00:12:39.144 } 00:12:39.144 }, 00:12:39.144 "base_bdevs_list": [ 00:12:39.144 { 00:12:39.144 "name": "spare", 00:12:39.144 "uuid": "baac6b56-3635-56a9-be7c-ee2751ec0e7b", 00:12:39.144 "is_configured": true, 00:12:39.144 "data_offset": 2048, 00:12:39.144 "data_size": 63488 00:12:39.144 }, 00:12:39.144 { 00:12:39.144 "name": "BaseBdev2", 00:12:39.144 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:39.144 "is_configured": true, 00:12:39.144 "data_offset": 2048, 00:12:39.144 "data_size": 63488 00:12:39.144 } 00:12:39.144 ] 00:12:39.144 }' 00:12:39.144 
21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:39.144 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=393 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.144 21:43:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.144 "name": "raid_bdev1", 00:12:39.144 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:39.144 "strip_size_kb": 0, 00:12:39.144 "state": "online", 00:12:39.144 "raid_level": "raid1", 00:12:39.144 "superblock": true, 00:12:39.144 "num_base_bdevs": 2, 00:12:39.144 "num_base_bdevs_discovered": 2, 00:12:39.144 "num_base_bdevs_operational": 2, 00:12:39.144 "process": { 00:12:39.144 "type": "rebuild", 00:12:39.144 "target": "spare", 00:12:39.144 "progress": { 00:12:39.144 "blocks": 22528, 00:12:39.144 "percent": 35 00:12:39.144 } 00:12:39.144 }, 00:12:39.144 "base_bdevs_list": [ 00:12:39.144 { 00:12:39.144 "name": "spare", 00:12:39.144 "uuid": "baac6b56-3635-56a9-be7c-ee2751ec0e7b", 00:12:39.144 "is_configured": true, 00:12:39.144 "data_offset": 2048, 00:12:39.144 "data_size": 63488 00:12:39.144 }, 00:12:39.144 { 00:12:39.144 "name": "BaseBdev2", 00:12:39.144 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:39.144 "is_configured": true, 00:12:39.144 "data_offset": 2048, 00:12:39.144 "data_size": 63488 00:12:39.144 } 00:12:39.144 ] 00:12:39.144 }' 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.144 21:43:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.144 21:43:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.144 21:43:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:40.082 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:40.082 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.082 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.082 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.082 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.082 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.082 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.082 21:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.082 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.082 21:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.342 21:43:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.342 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.342 "name": "raid_bdev1", 00:12:40.342 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:40.342 "strip_size_kb": 0, 00:12:40.342 "state": "online", 00:12:40.342 "raid_level": "raid1", 00:12:40.342 "superblock": true, 00:12:40.342 "num_base_bdevs": 2, 00:12:40.342 "num_base_bdevs_discovered": 2, 00:12:40.342 "num_base_bdevs_operational": 2, 00:12:40.342 "process": { 00:12:40.342 "type": "rebuild", 00:12:40.342 "target": "spare", 00:12:40.342 "progress": { 00:12:40.342 "blocks": 45056, 00:12:40.342 "percent": 70 
00:12:40.342 } 00:12:40.342 }, 00:12:40.342 "base_bdevs_list": [ 00:12:40.342 { 00:12:40.342 "name": "spare", 00:12:40.342 "uuid": "baac6b56-3635-56a9-be7c-ee2751ec0e7b", 00:12:40.342 "is_configured": true, 00:12:40.342 "data_offset": 2048, 00:12:40.342 "data_size": 63488 00:12:40.342 }, 00:12:40.342 { 00:12:40.342 "name": "BaseBdev2", 00:12:40.342 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:40.342 "is_configured": true, 00:12:40.342 "data_offset": 2048, 00:12:40.342 "data_size": 63488 00:12:40.342 } 00:12:40.342 ] 00:12:40.342 }' 00:12:40.342 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.342 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.342 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.342 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.342 21:43:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:40.912 [2024-09-29 21:43:59.863583] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:40.912 [2024-09-29 21:43:59.863675] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:40.912 [2024-09-29 21:43:59.863780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.482 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:41.482 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.482 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.482 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.482 21:44:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.482 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.482 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.482 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.482 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.482 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.482 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.482 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.482 "name": "raid_bdev1", 00:12:41.483 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:41.483 "strip_size_kb": 0, 00:12:41.483 "state": "online", 00:12:41.483 "raid_level": "raid1", 00:12:41.483 "superblock": true, 00:12:41.483 "num_base_bdevs": 2, 00:12:41.483 "num_base_bdevs_discovered": 2, 00:12:41.483 "num_base_bdevs_operational": 2, 00:12:41.483 "base_bdevs_list": [ 00:12:41.483 { 00:12:41.483 "name": "spare", 00:12:41.483 "uuid": "baac6b56-3635-56a9-be7c-ee2751ec0e7b", 00:12:41.483 "is_configured": true, 00:12:41.483 "data_offset": 2048, 00:12:41.483 "data_size": 63488 00:12:41.483 }, 00:12:41.483 { 00:12:41.483 "name": "BaseBdev2", 00:12:41.483 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:41.483 "is_configured": true, 00:12:41.483 "data_offset": 2048, 00:12:41.483 "data_size": 63488 00:12:41.483 } 00:12:41.483 ] 00:12:41.483 }' 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.483 "name": "raid_bdev1", 00:12:41.483 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:41.483 "strip_size_kb": 0, 00:12:41.483 "state": "online", 00:12:41.483 "raid_level": "raid1", 00:12:41.483 "superblock": true, 00:12:41.483 "num_base_bdevs": 2, 00:12:41.483 "num_base_bdevs_discovered": 2, 00:12:41.483 "num_base_bdevs_operational": 2, 00:12:41.483 "base_bdevs_list": [ 00:12:41.483 { 00:12:41.483 "name": "spare", 00:12:41.483 "uuid": "baac6b56-3635-56a9-be7c-ee2751ec0e7b", 00:12:41.483 "is_configured": true, 00:12:41.483 "data_offset": 2048, 
00:12:41.483 "data_size": 63488 00:12:41.483 }, 00:12:41.483 { 00:12:41.483 "name": "BaseBdev2", 00:12:41.483 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:41.483 "is_configured": true, 00:12:41.483 "data_offset": 2048, 00:12:41.483 "data_size": 63488 00:12:41.483 } 00:12:41.483 ] 00:12:41.483 }' 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.483 21:44:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.483 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.743 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.743 "name": "raid_bdev1", 00:12:41.743 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:41.743 "strip_size_kb": 0, 00:12:41.743 "state": "online", 00:12:41.743 "raid_level": "raid1", 00:12:41.743 "superblock": true, 00:12:41.743 "num_base_bdevs": 2, 00:12:41.743 "num_base_bdevs_discovered": 2, 00:12:41.743 "num_base_bdevs_operational": 2, 00:12:41.743 "base_bdevs_list": [ 00:12:41.743 { 00:12:41.743 "name": "spare", 00:12:41.743 "uuid": "baac6b56-3635-56a9-be7c-ee2751ec0e7b", 00:12:41.743 "is_configured": true, 00:12:41.743 "data_offset": 2048, 00:12:41.743 "data_size": 63488 00:12:41.743 }, 00:12:41.743 { 00:12:41.743 "name": "BaseBdev2", 00:12:41.743 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:41.743 "is_configured": true, 00:12:41.743 "data_offset": 2048, 00:12:41.743 "data_size": 63488 00:12:41.743 } 00:12:41.743 ] 00:12:41.743 }' 00:12:41.743 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.743 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.003 [2024-09-29 21:44:00.897142] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: raid_bdev1 00:12:42.003 [2024-09-29 21:44:00.897174] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.003 [2024-09-29 21:44:00.897252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.003 [2024-09-29 21:44:00.897316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.003 [2024-09-29 21:44:00.897326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.003 21:44:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:42.263 /dev/nbd0 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.263 1+0 records in 00:12:42.263 1+0 records out 
00:12:42.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409975 s, 10.0 MB/s 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.263 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:42.522 /dev/nbd1 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- 
# (( i = 1 )) 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.522 1+0 records in 00:12:42.522 1+0 records out 00:12:42.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000399411 s, 10.3 MB/s 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.522 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:42.782 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:42.782 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.782 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.782 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:42.782 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:42.782 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for 
i in "${nbd_list[@]}" 00:12:42.782 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:43.042 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:43.042 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:43.042 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:43.042 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.042 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.042 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:43.042 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:43.042 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.042 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.042 21:44:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:43.042 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:43.042 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:43.042 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:43.042 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.042 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.042 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:43.042 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:43.042 
21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.042 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:43.042 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:43.301 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.301 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.301 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.301 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:43.301 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.301 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.301 [2024-09-29 21:44:02.041920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:43.301 [2024-09-29 21:44:02.041978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.301 [2024-09-29 21:44:02.042002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:43.301 [2024-09-29 21:44:02.042011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.302 [2024-09-29 21:44:02.044087] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.302 [2024-09-29 21:44:02.044124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:43.302 [2024-09-29 21:44:02.044213] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:43.302 [2024-09-29 21:44:02.044267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.302 [2024-09-29 21:44:02.044405] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.302 spare 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.302 [2024-09-29 21:44:02.144306] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:43.302 [2024-09-29 21:44:02.144338] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:43.302 [2024-09-29 21:44:02.144592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:43.302 [2024-09-29 21:44:02.144747] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:43.302 [2024-09-29 21:44:02.144764] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:43.302 [2024-09-29 21:44:02.144919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.302 21:44:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.302 "name": "raid_bdev1", 00:12:43.302 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:43.302 "strip_size_kb": 0, 00:12:43.302 "state": "online", 00:12:43.302 "raid_level": "raid1", 00:12:43.302 "superblock": true, 00:12:43.302 "num_base_bdevs": 2, 00:12:43.302 "num_base_bdevs_discovered": 2, 00:12:43.302 "num_base_bdevs_operational": 2, 00:12:43.302 "base_bdevs_list": [ 00:12:43.302 { 00:12:43.302 "name": "spare", 00:12:43.302 "uuid": "baac6b56-3635-56a9-be7c-ee2751ec0e7b", 00:12:43.302 "is_configured": true, 00:12:43.302 "data_offset": 2048, 00:12:43.302 "data_size": 63488 00:12:43.302 }, 00:12:43.302 { 00:12:43.302 "name": "BaseBdev2", 00:12:43.302 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:43.302 "is_configured": true, 00:12:43.302 "data_offset": 2048, 00:12:43.302 "data_size": 63488 00:12:43.302 
} 00:12:43.302 ] 00:12:43.302 }' 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.302 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.871 "name": "raid_bdev1", 00:12:43.871 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:43.871 "strip_size_kb": 0, 00:12:43.871 "state": "online", 00:12:43.871 "raid_level": "raid1", 00:12:43.871 "superblock": true, 00:12:43.871 "num_base_bdevs": 2, 00:12:43.871 "num_base_bdevs_discovered": 2, 00:12:43.871 "num_base_bdevs_operational": 2, 00:12:43.871 "base_bdevs_list": [ 00:12:43.871 { 00:12:43.871 "name": "spare", 00:12:43.871 "uuid": "baac6b56-3635-56a9-be7c-ee2751ec0e7b", 00:12:43.871 "is_configured": true, 00:12:43.871 "data_offset": 2048, 
00:12:43.871 "data_size": 63488 00:12:43.871 }, 00:12:43.871 { 00:12:43.871 "name": "BaseBdev2", 00:12:43.871 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:43.871 "is_configured": true, 00:12:43.871 "data_offset": 2048, 00:12:43.871 "data_size": 63488 00:12:43.871 } 00:12:43.871 ] 00:12:43.871 }' 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.871 [2024-09-29 21:44:02.788759] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.871 "name": "raid_bdev1", 00:12:43.871 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:43.871 "strip_size_kb": 0, 00:12:43.871 "state": "online", 00:12:43.871 "raid_level": "raid1", 00:12:43.871 "superblock": true, 00:12:43.871 "num_base_bdevs": 2, 00:12:43.871 
"num_base_bdevs_discovered": 1, 00:12:43.871 "num_base_bdevs_operational": 1, 00:12:43.871 "base_bdevs_list": [ 00:12:43.871 { 00:12:43.871 "name": null, 00:12:43.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.871 "is_configured": false, 00:12:43.871 "data_offset": 0, 00:12:43.871 "data_size": 63488 00:12:43.871 }, 00:12:43.871 { 00:12:43.871 "name": "BaseBdev2", 00:12:43.871 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:43.871 "is_configured": true, 00:12:43.871 "data_offset": 2048, 00:12:43.871 "data_size": 63488 00:12:43.871 } 00:12:43.871 ] 00:12:43.871 }' 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.871 21:44:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.441 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:44.441 21:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.441 21:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.441 [2024-09-29 21:44:03.244204] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.442 [2024-09-29 21:44:03.244405] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:44.442 [2024-09-29 21:44:03.244422] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:44.442 [2024-09-29 21:44:03.244461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.442 [2024-09-29 21:44:03.260278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:44.442 21:44:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.442 21:44:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:44.442 [2024-09-29 21:44:03.262136] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.383 "name": "raid_bdev1", 00:12:45.383 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:45.383 "strip_size_kb": 0, 00:12:45.383 "state": "online", 00:12:45.383 "raid_level": "raid1", 
00:12:45.383 "superblock": true, 00:12:45.383 "num_base_bdevs": 2, 00:12:45.383 "num_base_bdevs_discovered": 2, 00:12:45.383 "num_base_bdevs_operational": 2, 00:12:45.383 "process": { 00:12:45.383 "type": "rebuild", 00:12:45.383 "target": "spare", 00:12:45.383 "progress": { 00:12:45.383 "blocks": 20480, 00:12:45.383 "percent": 32 00:12:45.383 } 00:12:45.383 }, 00:12:45.383 "base_bdevs_list": [ 00:12:45.383 { 00:12:45.383 "name": "spare", 00:12:45.383 "uuid": "baac6b56-3635-56a9-be7c-ee2751ec0e7b", 00:12:45.383 "is_configured": true, 00:12:45.383 "data_offset": 2048, 00:12:45.383 "data_size": 63488 00:12:45.383 }, 00:12:45.383 { 00:12:45.383 "name": "BaseBdev2", 00:12:45.383 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:45.383 "is_configured": true, 00:12:45.383 "data_offset": 2048, 00:12:45.383 "data_size": 63488 00:12:45.383 } 00:12:45.383 ] 00:12:45.383 }' 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.383 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.643 [2024-09-29 21:44:04.409619] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:45.643 [2024-09-29 21:44:04.467082] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:45.643 [2024-09-29 21:44:04.467143] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:45.643 [2024-09-29 21:44:04.467158] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:45.643 [2024-09-29 21:44:04.467167] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.643 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.643 "name": "raid_bdev1", 00:12:45.643 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:45.643 "strip_size_kb": 0, 00:12:45.643 "state": "online", 00:12:45.643 "raid_level": "raid1", 00:12:45.644 "superblock": true, 00:12:45.644 "num_base_bdevs": 2, 00:12:45.644 "num_base_bdevs_discovered": 1, 00:12:45.644 "num_base_bdevs_operational": 1, 00:12:45.644 "base_bdevs_list": [ 00:12:45.644 { 00:12:45.644 "name": null, 00:12:45.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.644 "is_configured": false, 00:12:45.644 "data_offset": 0, 00:12:45.644 "data_size": 63488 00:12:45.644 }, 00:12:45.644 { 00:12:45.644 "name": "BaseBdev2", 00:12:45.644 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:45.644 "is_configured": true, 00:12:45.644 "data_offset": 2048, 00:12:45.644 "data_size": 63488 00:12:45.644 } 00:12:45.644 ] 00:12:45.644 }' 00:12:45.644 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.644 21:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.215 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:46.215 21:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.215 21:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.215 [2024-09-29 21:44:04.919196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:46.215 [2024-09-29 21:44:04.919268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.215 [2024-09-29 21:44:04.919293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:46.215 [2024-09-29 21:44:04.919304] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.215 [2024-09-29 21:44:04.919794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.215 [2024-09-29 21:44:04.919825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:46.215 [2024-09-29 21:44:04.919917] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:46.215 [2024-09-29 21:44:04.919935] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:46.215 [2024-09-29 21:44:04.919945] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:46.215 [2024-09-29 21:44:04.919968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.215 [2024-09-29 21:44:04.934670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:46.215 spare 00:12:46.215 21:44:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.215 21:44:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:46.215 [2024-09-29 21:44:04.936506] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:47.157 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.157 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.157 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.157 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.157 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.157 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:47.157 21:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.157 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.157 21:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.157 21:44:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.157 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.157 "name": "raid_bdev1", 00:12:47.157 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:47.157 "strip_size_kb": 0, 00:12:47.157 "state": "online", 00:12:47.157 "raid_level": "raid1", 00:12:47.157 "superblock": true, 00:12:47.157 "num_base_bdevs": 2, 00:12:47.157 "num_base_bdevs_discovered": 2, 00:12:47.157 "num_base_bdevs_operational": 2, 00:12:47.157 "process": { 00:12:47.157 "type": "rebuild", 00:12:47.157 "target": "spare", 00:12:47.157 "progress": { 00:12:47.157 "blocks": 20480, 00:12:47.157 "percent": 32 00:12:47.157 } 00:12:47.157 }, 00:12:47.157 "base_bdevs_list": [ 00:12:47.157 { 00:12:47.157 "name": "spare", 00:12:47.157 "uuid": "baac6b56-3635-56a9-be7c-ee2751ec0e7b", 00:12:47.157 "is_configured": true, 00:12:47.157 "data_offset": 2048, 00:12:47.157 "data_size": 63488 00:12:47.157 }, 00:12:47.157 { 00:12:47.157 "name": "BaseBdev2", 00:12:47.157 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:47.157 "is_configured": true, 00:12:47.157 "data_offset": 2048, 00:12:47.157 "data_size": 63488 00:12:47.157 } 00:12:47.157 ] 00:12:47.157 }' 00:12:47.157 21:44:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.157 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.157 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.157 
21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.157 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:47.157 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.157 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.157 [2024-09-29 21:44:06.076365] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.417 [2024-09-29 21:44:06.141779] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:47.417 [2024-09-29 21:44:06.141848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.417 [2024-09-29 21:44:06.141865] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:47.417 [2024-09-29 21:44:06.141872] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:47.417 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.417 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:47.417 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.417 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.417 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.417 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.417 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:47.417 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.417 21:44:06 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.417 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.417 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.417 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.417 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.418 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.418 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.418 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.418 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.418 "name": "raid_bdev1", 00:12:47.418 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:47.418 "strip_size_kb": 0, 00:12:47.418 "state": "online", 00:12:47.418 "raid_level": "raid1", 00:12:47.418 "superblock": true, 00:12:47.418 "num_base_bdevs": 2, 00:12:47.418 "num_base_bdevs_discovered": 1, 00:12:47.418 "num_base_bdevs_operational": 1, 00:12:47.418 "base_bdevs_list": [ 00:12:47.418 { 00:12:47.418 "name": null, 00:12:47.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.418 "is_configured": false, 00:12:47.418 "data_offset": 0, 00:12:47.418 "data_size": 63488 00:12:47.418 }, 00:12:47.418 { 00:12:47.418 "name": "BaseBdev2", 00:12:47.418 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:47.418 "is_configured": true, 00:12:47.418 "data_offset": 2048, 00:12:47.418 "data_size": 63488 00:12:47.418 } 00:12:47.418 ] 00:12:47.418 }' 00:12:47.418 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.418 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.678 21:44:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:47.678 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.678 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:47.678 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:47.678 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.678 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.678 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.678 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.678 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.678 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.678 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.678 "name": "raid_bdev1", 00:12:47.678 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:47.678 "strip_size_kb": 0, 00:12:47.678 "state": "online", 00:12:47.678 "raid_level": "raid1", 00:12:47.678 "superblock": true, 00:12:47.678 "num_base_bdevs": 2, 00:12:47.678 "num_base_bdevs_discovered": 1, 00:12:47.678 "num_base_bdevs_operational": 1, 00:12:47.678 "base_bdevs_list": [ 00:12:47.678 { 00:12:47.678 "name": null, 00:12:47.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.678 "is_configured": false, 00:12:47.678 "data_offset": 0, 00:12:47.678 "data_size": 63488 00:12:47.678 }, 00:12:47.678 { 00:12:47.678 "name": "BaseBdev2", 00:12:47.678 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:47.678 "is_configured": true, 00:12:47.678 "data_offset": 2048, 00:12:47.678 "data_size": 
63488 00:12:47.678 } 00:12:47.678 ] 00:12:47.678 }' 00:12:47.678 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.938 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.938 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.938 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.938 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:47.938 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.938 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.938 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.938 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:47.938 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.938 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.938 [2024-09-29 21:44:06.738385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:47.938 [2024-09-29 21:44:06.738448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.938 [2024-09-29 21:44:06.738486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:47.938 [2024-09-29 21:44:06.738499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.938 [2024-09-29 21:44:06.739352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.938 [2024-09-29 21:44:06.739381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:47.938 [2024-09-29 21:44:06.739529] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:47.938 [2024-09-29 21:44:06.739554] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:47.938 [2024-09-29 21:44:06.739567] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:47.938 [2024-09-29 21:44:06.739581] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:47.938 BaseBdev1 00:12:47.938 21:44:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.938 21:44:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.878 "name": "raid_bdev1", 00:12:48.878 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:48.878 "strip_size_kb": 0, 00:12:48.878 "state": "online", 00:12:48.878 "raid_level": "raid1", 00:12:48.878 "superblock": true, 00:12:48.878 "num_base_bdevs": 2, 00:12:48.878 "num_base_bdevs_discovered": 1, 00:12:48.878 "num_base_bdevs_operational": 1, 00:12:48.878 "base_bdevs_list": [ 00:12:48.878 { 00:12:48.878 "name": null, 00:12:48.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.878 "is_configured": false, 00:12:48.878 "data_offset": 0, 00:12:48.878 "data_size": 63488 00:12:48.878 }, 00:12:48.878 { 00:12:48.878 "name": "BaseBdev2", 00:12:48.878 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:48.878 "is_configured": true, 00:12:48.878 "data_offset": 2048, 00:12:48.878 "data_size": 63488 00:12:48.878 } 00:12:48.878 ] 00:12:48.878 }' 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.878 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.447 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.448 "name": "raid_bdev1", 00:12:49.448 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:49.448 "strip_size_kb": 0, 00:12:49.448 "state": "online", 00:12:49.448 "raid_level": "raid1", 00:12:49.448 "superblock": true, 00:12:49.448 "num_base_bdevs": 2, 00:12:49.448 "num_base_bdevs_discovered": 1, 00:12:49.448 "num_base_bdevs_operational": 1, 00:12:49.448 "base_bdevs_list": [ 00:12:49.448 { 00:12:49.448 "name": null, 00:12:49.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.448 "is_configured": false, 00:12:49.448 "data_offset": 0, 00:12:49.448 "data_size": 63488 00:12:49.448 }, 00:12:49.448 { 00:12:49.448 "name": "BaseBdev2", 00:12:49.448 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:49.448 "is_configured": true, 00:12:49.448 "data_offset": 2048, 00:12:49.448 "data_size": 63488 00:12:49.448 } 00:12:49.448 ] 00:12:49.448 }' 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:49.448 21:44:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.448 [2024-09-29 21:44:08.291867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.448 [2024-09-29 21:44:08.292054] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:49.448 [2024-09-29 21:44:08.292070] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:49.448 request: 00:12:49.448 { 00:12:49.448 "base_bdev": "BaseBdev1", 00:12:49.448 "raid_bdev": "raid_bdev1", 00:12:49.448 "method": 
"bdev_raid_add_base_bdev", 00:12:49.448 "req_id": 1 00:12:49.448 } 00:12:49.448 Got JSON-RPC error response 00:12:49.448 response: 00:12:49.448 { 00:12:49.448 "code": -22, 00:12:49.448 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:49.448 } 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:49.448 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.404 21:44:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.404 "name": "raid_bdev1", 00:12:50.404 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:50.404 "strip_size_kb": 0, 00:12:50.404 "state": "online", 00:12:50.404 "raid_level": "raid1", 00:12:50.404 "superblock": true, 00:12:50.404 "num_base_bdevs": 2, 00:12:50.404 "num_base_bdevs_discovered": 1, 00:12:50.404 "num_base_bdevs_operational": 1, 00:12:50.404 "base_bdevs_list": [ 00:12:50.404 { 00:12:50.404 "name": null, 00:12:50.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.404 "is_configured": false, 00:12:50.404 "data_offset": 0, 00:12:50.404 "data_size": 63488 00:12:50.404 }, 00:12:50.404 { 00:12:50.404 "name": "BaseBdev2", 00:12:50.404 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:50.404 "is_configured": true, 00:12:50.404 "data_offset": 2048, 00:12:50.404 "data_size": 63488 00:12:50.404 } 00:12:50.404 ] 00:12:50.404 }' 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.404 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.975 "name": "raid_bdev1", 00:12:50.975 "uuid": "93513714-778d-40ac-8134-1c400ebde47b", 00:12:50.975 "strip_size_kb": 0, 00:12:50.975 "state": "online", 00:12:50.975 "raid_level": "raid1", 00:12:50.975 "superblock": true, 00:12:50.975 "num_base_bdevs": 2, 00:12:50.975 "num_base_bdevs_discovered": 1, 00:12:50.975 "num_base_bdevs_operational": 1, 00:12:50.975 "base_bdevs_list": [ 00:12:50.975 { 00:12:50.975 "name": null, 00:12:50.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.975 "is_configured": false, 00:12:50.975 "data_offset": 0, 00:12:50.975 "data_size": 63488 00:12:50.975 }, 00:12:50.975 { 00:12:50.975 "name": "BaseBdev2", 00:12:50.975 "uuid": "39ce0ecf-6f2e-5748-b3cc-515370e4fc4a", 00:12:50.975 "is_configured": true, 00:12:50.975 "data_offset": 2048, 00:12:50.975 "data_size": 63488 00:12:50.975 } 00:12:50.975 ] 00:12:50.975 }' 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75808 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75808 ']' 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 75808 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75808 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.975 killing process with pid 75808 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75808' 00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 75808 00:12:50.975 Received shutdown signal, test time was about 60.000000 seconds 00:12:50.975 00:12:50.975 Latency(us) 00:12:50.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.975 =================================================================================================================== 00:12:50.975 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:50.975 [2024-09-29 21:44:09.907315] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:50.975 [2024-09-29 21:44:09.907466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:12:50.975 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 75808 00:12:50.975 [2024-09-29 21:44:09.907524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.975 [2024-09-29 21:44:09.907538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:51.234 [2024-09-29 21:44:10.199109] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:52.617 00:12:52.617 real 0m23.214s 00:12:52.617 user 0m27.920s 00:12:52.617 sys 0m3.858s 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.617 ************************************ 00:12:52.617 END TEST raid_rebuild_test_sb 00:12:52.617 ************************************ 00:12:52.617 21:44:11 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:52.617 21:44:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:52.617 21:44:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:52.617 21:44:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:52.617 ************************************ 00:12:52.617 START TEST raid_rebuild_test_io 00:12:52.617 ************************************ 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 
-- # local superblock=false 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:52.617 
21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:52.617 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76538 00:12:52.618 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:52.618 21:44:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76538 00:12:52.618 21:44:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 76538 ']' 00:12:52.618 21:44:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.618 21:44:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:52.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.618 21:44:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.618 21:44:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:52.618 21:44:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.618 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:52.618 Zero copy mechanism will not be used. 00:12:52.618 [2024-09-29 21:44:11.568903] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:52.618 [2024-09-29 21:44:11.569053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76538 ] 00:12:52.877 [2024-09-29 21:44:11.738600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.137 [2024-09-29 21:44:11.936480] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.137 [2024-09-29 21:44:12.120540] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.137 [2024-09-29 21:44:12.120582] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.398 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:53.398 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:53.398 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.398 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:53.398 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.398 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.659 BaseBdev1_malloc 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.659 [2024-09-29 21:44:12.428316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:53.659 [2024-09-29 21:44:12.428391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.659 [2024-09-29 21:44:12.428415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:53.659 [2024-09-29 21:44:12.428429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.659 [2024-09-29 21:44:12.430448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.659 [2024-09-29 21:44:12.430486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:53.659 BaseBdev1 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.659 BaseBdev2_malloc 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.659 [2024-09-29 21:44:12.514544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:53.659 [2024-09-29 21:44:12.514605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.659 [2024-09-29 21:44:12.514624] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:53.659 [2024-09-29 21:44:12.514635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.659 [2024-09-29 21:44:12.516545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.659 [2024-09-29 21:44:12.516586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:53.659 BaseBdev2 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.659 spare_malloc 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.659 spare_delay 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.659 [2024-09-29 21:44:12.579424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:53.659 [2024-09-29 21:44:12.579483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.659 [2024-09-29 21:44:12.579501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:53.659 [2024-09-29 21:44:12.579511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.659 [2024-09-29 21:44:12.581461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.659 [2024-09-29 21:44:12.581503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:53.659 spare 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.659 [2024-09-29 21:44:12.591444] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.659 [2024-09-29 21:44:12.593074] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.659 [2024-09-29 21:44:12.593162] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:53.659 [2024-09-29 21:44:12.593173] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:53.659 [2024-09-29 21:44:12.593406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:53.659 [2024-09-29 21:44:12.593581] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:53.659 [2024-09-29 21:44:12.593596] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:12:53.659 [2024-09-29 21:44:12.593728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.659 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.920 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.920 
"name": "raid_bdev1", 00:12:53.920 "uuid": "b7aded57-0ce4-414c-b994-93cbda7de255", 00:12:53.920 "strip_size_kb": 0, 00:12:53.920 "state": "online", 00:12:53.920 "raid_level": "raid1", 00:12:53.920 "superblock": false, 00:12:53.920 "num_base_bdevs": 2, 00:12:53.920 "num_base_bdevs_discovered": 2, 00:12:53.920 "num_base_bdevs_operational": 2, 00:12:53.920 "base_bdevs_list": [ 00:12:53.920 { 00:12:53.920 "name": "BaseBdev1", 00:12:53.920 "uuid": "752eb0f3-1d1a-534a-be37-8c030e9b39f0", 00:12:53.920 "is_configured": true, 00:12:53.920 "data_offset": 0, 00:12:53.920 "data_size": 65536 00:12:53.920 }, 00:12:53.920 { 00:12:53.920 "name": "BaseBdev2", 00:12:53.920 "uuid": "880c7bc0-1119-51de-a3f8-ccbecae05177", 00:12:53.920 "is_configured": true, 00:12:53.920 "data_offset": 0, 00:12:53.920 "data_size": 65536 00:12:53.920 } 00:12:53.920 ] 00:12:53.920 }' 00:12:53.920 21:44:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.920 21:44:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.180 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:54.180 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.180 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.180 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:54.180 [2024-09-29 21:44:13.106823] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.180 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.180 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:54.180 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.180 21:44:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:54.180 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.180 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.440 [2024-09-29 21:44:13.202390] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:54.440 21:44:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.440 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.440 "name": "raid_bdev1", 00:12:54.441 "uuid": "b7aded57-0ce4-414c-b994-93cbda7de255", 00:12:54.441 "strip_size_kb": 0, 00:12:54.441 "state": "online", 00:12:54.441 "raid_level": "raid1", 00:12:54.441 "superblock": false, 00:12:54.441 "num_base_bdevs": 2, 00:12:54.441 "num_base_bdevs_discovered": 1, 00:12:54.441 "num_base_bdevs_operational": 1, 00:12:54.441 "base_bdevs_list": [ 00:12:54.441 { 00:12:54.441 "name": null, 00:12:54.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.441 "is_configured": false, 00:12:54.441 "data_offset": 0, 00:12:54.441 "data_size": 65536 00:12:54.441 }, 00:12:54.441 { 00:12:54.441 "name": "BaseBdev2", 00:12:54.441 "uuid": "880c7bc0-1119-51de-a3f8-ccbecae05177", 00:12:54.441 "is_configured": true, 00:12:54.441 "data_offset": 0, 00:12:54.441 "data_size": 65536 00:12:54.441 } 00:12:54.441 ] 00:12:54.441 }' 00:12:54.441 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:54.441 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.441 [2024-09-29 21:44:13.285916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:54.441 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:54.441 Zero copy mechanism will not be used. 00:12:54.441 Running I/O for 60 seconds... 00:12:54.701 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:54.701 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.701 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.701 [2024-09-29 21:44:13.629084] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:54.701 21:44:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.701 21:44:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:54.961 [2024-09-29 21:44:13.690343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:54.961 [2024-09-29 21:44:13.692130] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:54.961 [2024-09-29 21:44:13.804510] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:54.961 [2024-09-29 21:44:13.805094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:55.221 [2024-09-29 21:44:14.020279] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:55.221 [2024-09-29 21:44:14.020562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:55.481 222.00 IOPS, 666.00 MiB/s 
[2024-09-29 21:44:14.380646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:55.481 [2024-09-29 21:44:14.380992] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:55.740 [2024-09-29 21:44:14.587079] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:55.740 [2024-09-29 21:44:14.587290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:55.740 21:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.740 21:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.740 21:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.740 21:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.740 21:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.740 21:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.740 21:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.740 21:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.740 21:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.740 21:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.740 21:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.740 "name": "raid_bdev1", 00:12:55.740 "uuid": "b7aded57-0ce4-414c-b994-93cbda7de255", 00:12:55.740 "strip_size_kb": 0, 00:12:55.740 
"state": "online", 00:12:55.740 "raid_level": "raid1", 00:12:55.740 "superblock": false, 00:12:55.740 "num_base_bdevs": 2, 00:12:55.740 "num_base_bdevs_discovered": 2, 00:12:55.740 "num_base_bdevs_operational": 2, 00:12:55.740 "process": { 00:12:55.740 "type": "rebuild", 00:12:55.740 "target": "spare", 00:12:55.740 "progress": { 00:12:55.740 "blocks": 10240, 00:12:55.740 "percent": 15 00:12:55.741 } 00:12:55.741 }, 00:12:55.741 "base_bdevs_list": [ 00:12:55.741 { 00:12:55.741 "name": "spare", 00:12:55.741 "uuid": "cb40db6f-4d9c-57b5-96dd-31519e10fa78", 00:12:55.741 "is_configured": true, 00:12:55.741 "data_offset": 0, 00:12:55.741 "data_size": 65536 00:12:55.741 }, 00:12:55.741 { 00:12:55.741 "name": "BaseBdev2", 00:12:55.741 "uuid": "880c7bc0-1119-51de-a3f8-ccbecae05177", 00:12:55.741 "is_configured": true, 00:12:55.741 "data_offset": 0, 00:12:55.741 "data_size": 65536 00:12:55.741 } 00:12:55.741 ] 00:12:55.741 }' 00:12:56.001 21:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.001 21:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.001 21:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.001 21:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.001 21:44:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:56.001 21:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.001 21:44:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.001 [2024-09-29 21:44:14.822477] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.001 [2024-09-29 21:44:14.903290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:56.260 
[2024-09-29 21:44:15.009962] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:56.260 [2024-09-29 21:44:15.017696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.260 [2024-09-29 21:44:15.017744] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.260 [2024-09-29 21:44:15.017756] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:56.260 [2024-09-29 21:44:15.050071] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.260 21:44:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.260 "name": "raid_bdev1", 00:12:56.260 "uuid": "b7aded57-0ce4-414c-b994-93cbda7de255", 00:12:56.260 "strip_size_kb": 0, 00:12:56.260 "state": "online", 00:12:56.260 "raid_level": "raid1", 00:12:56.260 "superblock": false, 00:12:56.260 "num_base_bdevs": 2, 00:12:56.260 "num_base_bdevs_discovered": 1, 00:12:56.260 "num_base_bdevs_operational": 1, 00:12:56.260 "base_bdevs_list": [ 00:12:56.260 { 00:12:56.260 "name": null, 00:12:56.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.260 "is_configured": false, 00:12:56.260 "data_offset": 0, 00:12:56.260 "data_size": 65536 00:12:56.260 }, 00:12:56.260 { 00:12:56.260 "name": "BaseBdev2", 00:12:56.260 "uuid": "880c7bc0-1119-51de-a3f8-ccbecae05177", 00:12:56.260 "is_configured": true, 00:12:56.260 "data_offset": 0, 00:12:56.260 "data_size": 65536 00:12:56.260 } 00:12:56.260 ] 00:12:56.260 }' 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.260 21:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.779 185.50 IOPS, 556.50 MiB/s 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.779 21:44:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.779 "name": "raid_bdev1", 00:12:56.779 "uuid": "b7aded57-0ce4-414c-b994-93cbda7de255", 00:12:56.779 "strip_size_kb": 0, 00:12:56.779 "state": "online", 00:12:56.779 "raid_level": "raid1", 00:12:56.779 "superblock": false, 00:12:56.779 "num_base_bdevs": 2, 00:12:56.779 "num_base_bdevs_discovered": 1, 00:12:56.779 "num_base_bdevs_operational": 1, 00:12:56.779 "base_bdevs_list": [ 00:12:56.779 { 00:12:56.779 "name": null, 00:12:56.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.779 "is_configured": false, 00:12:56.779 "data_offset": 0, 00:12:56.779 "data_size": 65536 00:12:56.779 }, 00:12:56.779 { 00:12:56.779 "name": "BaseBdev2", 00:12:56.779 "uuid": "880c7bc0-1119-51de-a3f8-ccbecae05177", 00:12:56.779 "is_configured": true, 00:12:56.779 "data_offset": 0, 00:12:56.779 "data_size": 65536 00:12:56.779 } 00:12:56.779 ] 00:12:56.779 }' 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- 
# jq -r '.process.target // "none"' 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.779 [2024-09-29 21:44:15.656130] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.779 21:44:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:56.779 [2024-09-29 21:44:15.711849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:56.779 [2024-09-29 21:44:15.713705] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:57.038 [2024-09-29 21:44:15.815702] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:57.039 [2024-09-29 21:44:15.816207] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:57.296 [2024-09-29 21:44:16.044845] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:57.555 195.33 IOPS, 586.00 MiB/s [2024-09-29 21:44:16.395747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:57.814 [2024-09-29 21:44:16.631998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:57.814 [2024-09-29 21:44:16.632350] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:57.814 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.814 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.814 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.814 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.814 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.814 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.814 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.814 21:44:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.814 21:44:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.814 21:44:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.814 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.814 "name": "raid_bdev1", 00:12:57.815 "uuid": "b7aded57-0ce4-414c-b994-93cbda7de255", 00:12:57.815 "strip_size_kb": 0, 00:12:57.815 "state": "online", 00:12:57.815 "raid_level": "raid1", 00:12:57.815 "superblock": false, 00:12:57.815 "num_base_bdevs": 2, 00:12:57.815 "num_base_bdevs_discovered": 2, 00:12:57.815 "num_base_bdevs_operational": 2, 00:12:57.815 "process": { 00:12:57.815 "type": "rebuild", 00:12:57.815 "target": "spare", 00:12:57.815 "progress": { 00:12:57.815 "blocks": 10240, 00:12:57.815 "percent": 15 00:12:57.815 } 00:12:57.815 }, 00:12:57.815 "base_bdevs_list": [ 00:12:57.815 { 00:12:57.815 "name": "spare", 00:12:57.815 "uuid": "cb40db6f-4d9c-57b5-96dd-31519e10fa78", 00:12:57.815 
"is_configured": true, 00:12:57.815 "data_offset": 0, 00:12:57.815 "data_size": 65536 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "name": "BaseBdev2", 00:12:57.815 "uuid": "880c7bc0-1119-51de-a3f8-ccbecae05177", 00:12:57.815 "is_configured": true, 00:12:57.815 "data_offset": 0, 00:12:57.815 "data_size": 65536 00:12:57.815 } 00:12:57.815 ] 00:12:57.815 }' 00:12:57.815 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.815 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=412 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.075 "name": "raid_bdev1", 00:12:58.075 "uuid": "b7aded57-0ce4-414c-b994-93cbda7de255", 00:12:58.075 "strip_size_kb": 0, 00:12:58.075 "state": "online", 00:12:58.075 "raid_level": "raid1", 00:12:58.075 "superblock": false, 00:12:58.075 "num_base_bdevs": 2, 00:12:58.075 "num_base_bdevs_discovered": 2, 00:12:58.075 "num_base_bdevs_operational": 2, 00:12:58.075 "process": { 00:12:58.075 "type": "rebuild", 00:12:58.075 "target": "spare", 00:12:58.075 "progress": { 00:12:58.075 "blocks": 14336, 00:12:58.075 "percent": 21 00:12:58.075 } 00:12:58.075 }, 00:12:58.075 "base_bdevs_list": [ 00:12:58.075 { 00:12:58.075 "name": "spare", 00:12:58.075 "uuid": "cb40db6f-4d9c-57b5-96dd-31519e10fa78", 00:12:58.075 "is_configured": true, 00:12:58.075 "data_offset": 0, 00:12:58.075 "data_size": 65536 00:12:58.075 }, 00:12:58.075 { 00:12:58.075 "name": "BaseBdev2", 00:12:58.075 "uuid": "880c7bc0-1119-51de-a3f8-ccbecae05177", 00:12:58.075 "is_configured": true, 00:12:58.075 "data_offset": 0, 00:12:58.075 "data_size": 65536 00:12:58.075 } 00:12:58.075 ] 00:12:58.075 }' 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.075 21:44:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.075 [2024-09-29 21:44:16.955712] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.075 21:44:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:58.335 [2024-09-29 21:44:17.182215] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:58.594 169.50 IOPS, 508.50 MiB/s [2024-09-29 21:44:17.409143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:58.853 [2024-09-29 21:44:17.629021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:58.854 [2024-09-29 21:44:17.634555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:59.113 21:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.113 21:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.113 21:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.113 21:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.113 21:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.113 21:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.113 21:44:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.113 21:44:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.113 21:44:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.113 21:44:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.113 21:44:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.113 21:44:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.113 "name": "raid_bdev1", 00:12:59.113 "uuid": "b7aded57-0ce4-414c-b994-93cbda7de255", 00:12:59.113 "strip_size_kb": 0, 00:12:59.113 "state": "online", 00:12:59.113 "raid_level": "raid1", 00:12:59.113 "superblock": false, 00:12:59.113 "num_base_bdevs": 2, 00:12:59.113 "num_base_bdevs_discovered": 2, 00:12:59.113 "num_base_bdevs_operational": 2, 00:12:59.113 "process": { 00:12:59.113 "type": "rebuild", 00:12:59.113 "target": "spare", 00:12:59.113 "progress": { 00:12:59.113 "blocks": 30720, 00:12:59.113 "percent": 46 00:12:59.113 } 00:12:59.113 }, 00:12:59.113 "base_bdevs_list": [ 00:12:59.113 { 00:12:59.113 "name": "spare", 00:12:59.113 "uuid": "cb40db6f-4d9c-57b5-96dd-31519e10fa78", 00:12:59.113 "is_configured": true, 00:12:59.113 "data_offset": 0, 00:12:59.113 "data_size": 65536 00:12:59.113 }, 00:12:59.113 { 00:12:59.113 "name": "BaseBdev2", 00:12:59.113 "uuid": "880c7bc0-1119-51de-a3f8-ccbecae05177", 00:12:59.113 "is_configured": true, 00:12:59.113 "data_offset": 0, 00:12:59.113 "data_size": 65536 00:12:59.113 } 00:12:59.113 ] 00:12:59.113 }' 00:12:59.113 21:44:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.113 [2024-09-29 21:44:18.069285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:59.113 21:44:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:59.113 21:44:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.372 21:44:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:59.372 21:44:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:59.372 147.00 IOPS, 441.00 MiB/s [2024-09-29 21:44:18.287241] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:00.310 [2024-09-29 21:44:18.960262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:00.310 [2024-09-29 21:44:18.960633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:00.310 [2024-09-29 21:44:19.085004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:00.310 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:00.310 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.310 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.310 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.310 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.310 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.310 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.311 21:44:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.311 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.311 21:44:19 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.311 21:44:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.311 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.311 "name": "raid_bdev1", 00:13:00.311 "uuid": "b7aded57-0ce4-414c-b994-93cbda7de255", 00:13:00.311 "strip_size_kb": 0, 00:13:00.311 "state": "online", 00:13:00.311 "raid_level": "raid1", 00:13:00.311 "superblock": false, 00:13:00.311 "num_base_bdevs": 2, 00:13:00.311 "num_base_bdevs_discovered": 2, 00:13:00.311 "num_base_bdevs_operational": 2, 00:13:00.311 "process": { 00:13:00.311 "type": "rebuild", 00:13:00.311 "target": "spare", 00:13:00.311 "progress": { 00:13:00.311 "blocks": 47104, 00:13:00.311 "percent": 71 00:13:00.311 } 00:13:00.311 }, 00:13:00.311 "base_bdevs_list": [ 00:13:00.311 { 00:13:00.311 "name": "spare", 00:13:00.311 "uuid": "cb40db6f-4d9c-57b5-96dd-31519e10fa78", 00:13:00.311 "is_configured": true, 00:13:00.311 "data_offset": 0, 00:13:00.311 "data_size": 65536 00:13:00.311 }, 00:13:00.311 { 00:13:00.311 "name": "BaseBdev2", 00:13:00.311 "uuid": "880c7bc0-1119-51de-a3f8-ccbecae05177", 00:13:00.311 "is_configured": true, 00:13:00.311 "data_offset": 0, 00:13:00.311 "data_size": 65536 00:13:00.311 } 00:13:00.311 ] 00:13:00.311 }' 00:13:00.311 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.311 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.311 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.311 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.311 21:44:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:00.570 129.50 IOPS, 388.50 MiB/s [2024-09-29 21:44:19.305629] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:00.829 [2024-09-29 21:44:19.634795] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:00.829 [2024-09-29 21:44:19.740984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:01.088 [2024-09-29 21:44:20.067165] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:01.348 [2024-09-29 21:44:20.166981] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:01.348 [2024-09-29 21:44:20.168950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.348 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:01.348 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:01.348 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.348 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:01.348 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:01.348 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.348 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.348 21:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.348 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.348 21:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.348 116.29 IOPS, 348.86 MiB/s 21:44:20 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.348 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.348 "name": "raid_bdev1", 00:13:01.348 "uuid": "b7aded57-0ce4-414c-b994-93cbda7de255", 00:13:01.348 "strip_size_kb": 0, 00:13:01.348 "state": "online", 00:13:01.348 "raid_level": "raid1", 00:13:01.348 "superblock": false, 00:13:01.348 "num_base_bdevs": 2, 00:13:01.348 "num_base_bdevs_discovered": 2, 00:13:01.348 "num_base_bdevs_operational": 2, 00:13:01.348 "base_bdevs_list": [ 00:13:01.348 { 00:13:01.348 "name": "spare", 00:13:01.348 "uuid": "cb40db6f-4d9c-57b5-96dd-31519e10fa78", 00:13:01.348 "is_configured": true, 00:13:01.348 "data_offset": 0, 00:13:01.348 "data_size": 65536 00:13:01.348 }, 00:13:01.348 { 00:13:01.348 "name": "BaseBdev2", 00:13:01.348 "uuid": "880c7bc0-1119-51de-a3f8-ccbecae05177", 00:13:01.348 "is_configured": true, 00:13:01.348 "data_offset": 0, 00:13:01.348 "data_size": 65536 00:13:01.348 } 00:13:01.348 ] 00:13:01.348 }' 00:13:01.348 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.608 "name": "raid_bdev1", 00:13:01.608 "uuid": "b7aded57-0ce4-414c-b994-93cbda7de255", 00:13:01.608 "strip_size_kb": 0, 00:13:01.608 "state": "online", 00:13:01.608 "raid_level": "raid1", 00:13:01.608 "superblock": false, 00:13:01.608 "num_base_bdevs": 2, 00:13:01.608 "num_base_bdevs_discovered": 2, 00:13:01.608 "num_base_bdevs_operational": 2, 00:13:01.608 "base_bdevs_list": [ 00:13:01.608 { 00:13:01.608 "name": "spare", 00:13:01.608 "uuid": "cb40db6f-4d9c-57b5-96dd-31519e10fa78", 00:13:01.608 "is_configured": true, 00:13:01.608 "data_offset": 0, 00:13:01.608 "data_size": 65536 00:13:01.608 }, 00:13:01.608 { 00:13:01.608 "name": "BaseBdev2", 00:13:01.608 "uuid": "880c7bc0-1119-51de-a3f8-ccbecae05177", 00:13:01.608 "is_configured": true, 00:13:01.608 "data_offset": 0, 00:13:01.608 "data_size": 65536 00:13:01.608 } 00:13:01.608 ] 00:13:01.608 }' 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.608 21:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.868 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.868 "name": "raid_bdev1", 00:13:01.868 "uuid": "b7aded57-0ce4-414c-b994-93cbda7de255", 00:13:01.868 "strip_size_kb": 0, 
00:13:01.868 "state": "online", 00:13:01.868 "raid_level": "raid1", 00:13:01.868 "superblock": false, 00:13:01.868 "num_base_bdevs": 2, 00:13:01.868 "num_base_bdevs_discovered": 2, 00:13:01.868 "num_base_bdevs_operational": 2, 00:13:01.868 "base_bdevs_list": [ 00:13:01.868 { 00:13:01.868 "name": "spare", 00:13:01.868 "uuid": "cb40db6f-4d9c-57b5-96dd-31519e10fa78", 00:13:01.868 "is_configured": true, 00:13:01.868 "data_offset": 0, 00:13:01.868 "data_size": 65536 00:13:01.868 }, 00:13:01.868 { 00:13:01.868 "name": "BaseBdev2", 00:13:01.868 "uuid": "880c7bc0-1119-51de-a3f8-ccbecae05177", 00:13:01.868 "is_configured": true, 00:13:01.868 "data_offset": 0, 00:13:01.868 "data_size": 65536 00:13:01.868 } 00:13:01.868 ] 00:13:01.868 }' 00:13:01.868 21:44:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.868 21:44:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.128 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:02.128 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.128 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.128 [2024-09-29 21:44:21.029732] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.128 [2024-09-29 21:44:21.029775] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.388 00:13:02.388 Latency(us) 00:13:02.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.388 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:02.388 raid_bdev1 : 7.86 107.17 321.50 0.00 0.00 12253.26 291.55 113557.58 00:13:02.388 =================================================================================================================== 00:13:02.388 Total : 107.17 321.50 0.00 0.00 
12253.26 291.55 113557.58 00:13:02.388 [2024-09-29 21:44:21.149866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.388 [2024-09-29 21:44:21.149909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.388 [2024-09-29 21:44:21.149982] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.388 [2024-09-29 21:44:21.149992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:02.388 { 00:13:02.388 "results": [ 00:13:02.388 { 00:13:02.388 "job": "raid_bdev1", 00:13:02.388 "core_mask": "0x1", 00:13:02.388 "workload": "randrw", 00:13:02.388 "percentage": 50, 00:13:02.388 "status": "finished", 00:13:02.388 "queue_depth": 2, 00:13:02.388 "io_size": 3145728, 00:13:02.388 "runtime": 7.856816, 00:13:02.388 "iops": 107.16809455636991, 00:13:02.388 "mibps": 321.5042836691097, 00:13:02.388 "io_failed": 0, 00:13:02.388 "io_timeout": 0, 00:13:02.388 "avg_latency_us": 12253.262662199588, 00:13:02.388 "min_latency_us": 291.54934497816595, 00:13:02.388 "max_latency_us": 113557.57554585153 00:13:02.388 } 00:13:02.389 ], 00:13:02.389 "core_count": 1 00:13:02.389 } 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:02.389 21:44:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.389 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:02.649 /dev/nbd0 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.649 1+0 records in 00:13:02.649 1+0 records out 00:13:02.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385243 s, 10.6 MB/s 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.649 21:44:21 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.649 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:02.909 /dev/nbd1 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:02.909 1+0 records in 00:13:02.909 1+0 records out 00:13:02.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425711 s, 9.6 MB/s 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:02.909 21:44:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:13:03.169 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:03.170 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76538 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 76538 ']' 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 76538 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76538 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:03.430 killing process with pid 76538 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76538' 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 76538 00:13:03.430 Received shutdown signal, test time was about 9.094682 seconds 00:13:03.430 00:13:03.430 Latency(us) 00:13:03.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.430 
=================================================================================================================== 00:13:03.430 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:03.430 [2024-09-29 21:44:22.364937] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:03.430 21:44:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 76538 00:13:03.690 [2024-09-29 21:44:22.582679] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:05.072 00:13:05.072 real 0m12.377s 00:13:05.072 user 0m15.500s 00:13:05.072 sys 0m1.536s 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.072 ************************************ 00:13:05.072 END TEST raid_rebuild_test_io 00:13:05.072 ************************************ 00:13:05.072 21:44:23 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:05.072 21:44:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:05.072 21:44:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.072 21:44:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.072 ************************************ 00:13:05.072 START TEST raid_rebuild_test_sb_io 00:13:05.072 ************************************ 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local 
superblock=true 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.072 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76914 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76914 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 76914 ']' 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:05.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:05.073 21:44:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.073 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:05.073 Zero copy mechanism will not be used. 00:13:05.073 [2024-09-29 21:44:24.023064] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:13:05.073 [2024-09-29 21:44:24.023172] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76914 ] 00:13:05.333 [2024-09-29 21:44:24.185098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.592 [2024-09-29 21:44:24.387524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.592 [2024-09-29 21:44:24.569756] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.592 [2024-09-29 21:44:24.569806] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.162 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.163 BaseBdev1_malloc 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.163 [2024-09-29 21:44:24.883779] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:06.163 [2024-09-29 21:44:24.883856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.163 [2024-09-29 21:44:24.883885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:06.163 [2024-09-29 21:44:24.883902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.163 [2024-09-29 21:44:24.885882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.163 [2024-09-29 21:44:24.885928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:06.163 BaseBdev1 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.163 BaseBdev2_malloc 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.163 [2024-09-29 21:44:24.949911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:06.163 [2024-09-29 21:44:24.949978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:06.163 [2024-09-29 21:44:24.950002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:06.163 [2024-09-29 21:44:24.950014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.163 [2024-09-29 21:44:24.951996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.163 [2024-09-29 21:44:24.952050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:06.163 BaseBdev2 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.163 spare_malloc 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.163 21:44:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.163 spare_delay 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.163 
[2024-09-29 21:44:25.016371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:06.163 [2024-09-29 21:44:25.016435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.163 [2024-09-29 21:44:25.016457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:06.163 [2024-09-29 21:44:25.016470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.163 [2024-09-29 21:44:25.018410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.163 [2024-09-29 21:44:25.018453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:06.163 spare 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.163 [2024-09-29 21:44:25.028406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.163 [2024-09-29 21:44:25.030145] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.163 [2024-09-29 21:44:25.030324] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:06.163 [2024-09-29 21:44:25.030351] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:06.163 [2024-09-29 21:44:25.030612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:06.163 [2024-09-29 21:44:25.030793] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:06.163 [2024-09-29 
21:44:25.030813] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:06.163 [2024-09-29 21:44:25.030970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.163 "name": "raid_bdev1", 00:13:06.163 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:06.163 "strip_size_kb": 0, 00:13:06.163 "state": "online", 00:13:06.163 "raid_level": "raid1", 00:13:06.163 "superblock": true, 00:13:06.163 "num_base_bdevs": 2, 00:13:06.163 "num_base_bdevs_discovered": 2, 00:13:06.163 "num_base_bdevs_operational": 2, 00:13:06.163 "base_bdevs_list": [ 00:13:06.163 { 00:13:06.163 "name": "BaseBdev1", 00:13:06.163 "uuid": "23d2f2ea-6f91-5d17-8ab7-565ec9423b82", 00:13:06.163 "is_configured": true, 00:13:06.163 "data_offset": 2048, 00:13:06.163 "data_size": 63488 00:13:06.163 }, 00:13:06.163 { 00:13:06.163 "name": "BaseBdev2", 00:13:06.163 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:06.163 "is_configured": true, 00:13:06.163 "data_offset": 2048, 00:13:06.163 "data_size": 63488 00:13:06.163 } 00:13:06.163 ] 00:13:06.163 }' 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.163 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.734 [2024-09-29 21:44:25.540204] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.734 [2024-09-29 21:44:25.639719] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.734 "name": "raid_bdev1", 00:13:06.734 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:06.734 "strip_size_kb": 0, 00:13:06.734 "state": "online", 00:13:06.734 "raid_level": "raid1", 00:13:06.734 "superblock": true, 00:13:06.734 "num_base_bdevs": 2, 00:13:06.734 "num_base_bdevs_discovered": 1, 00:13:06.734 "num_base_bdevs_operational": 1, 00:13:06.734 "base_bdevs_list": [ 00:13:06.734 { 00:13:06.734 "name": null, 00:13:06.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.734 "is_configured": false, 00:13:06.734 "data_offset": 0, 00:13:06.734 "data_size": 63488 00:13:06.734 }, 00:13:06.734 { 00:13:06.734 "name": "BaseBdev2", 00:13:06.734 "uuid": 
"58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:06.734 "is_configured": true, 00:13:06.734 "data_offset": 2048, 00:13:06.734 "data_size": 63488 00:13:06.734 } 00:13:06.734 ] 00:13:06.734 }' 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.734 21:44:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.993 [2024-09-29 21:44:25.738986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:06.993 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:06.993 Zero copy mechanism will not be used. 00:13:06.993 Running I/O for 60 seconds... 00:13:07.253 21:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:07.253 21:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.253 21:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.253 [2024-09-29 21:44:26.106893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.253 21:44:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.253 21:44:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:07.253 [2024-09-29 21:44:26.154890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:07.253 [2024-09-29 21:44:26.156705] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:07.650 [2024-09-29 21:44:26.280598] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:07.650 [2024-09-29 21:44:26.281074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:07.651 [2024-09-29 21:44:26.490269] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:07.651 [2024-09-29 21:44:26.490627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:07.944 [2024-09-29 21:44:26.718800] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:07.944 [2024-09-29 21:44:26.719224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:08.225 214.00 IOPS, 642.00 MiB/s [2024-09-29 21:44:26.961464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:08.225 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.225 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.225 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.225 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.225 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.225 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.225 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.225 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.225 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.225 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.225 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 
-- # raid_bdev_info='{ 00:13:08.225 "name": "raid_bdev1", 00:13:08.225 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:08.225 "strip_size_kb": 0, 00:13:08.225 "state": "online", 00:13:08.225 "raid_level": "raid1", 00:13:08.225 "superblock": true, 00:13:08.225 "num_base_bdevs": 2, 00:13:08.225 "num_base_bdevs_discovered": 2, 00:13:08.225 "num_base_bdevs_operational": 2, 00:13:08.225 "process": { 00:13:08.225 "type": "rebuild", 00:13:08.225 "target": "spare", 00:13:08.225 "progress": { 00:13:08.225 "blocks": 10240, 00:13:08.225 "percent": 16 00:13:08.225 } 00:13:08.225 }, 00:13:08.225 "base_bdevs_list": [ 00:13:08.225 { 00:13:08.225 "name": "spare", 00:13:08.225 "uuid": "c8eb79a3-dacf-5a27-9f97-8c1a00db0993", 00:13:08.225 "is_configured": true, 00:13:08.225 "data_offset": 2048, 00:13:08.225 "data_size": 63488 00:13:08.225 }, 00:13:08.225 { 00:13:08.225 "name": "BaseBdev2", 00:13:08.225 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:08.225 "is_configured": true, 00:13:08.225 "data_offset": 2048, 00:13:08.225 "data_size": 63488 00:13:08.225 } 00:13:08.225 ] 00:13:08.225 }' 00:13:08.225 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.485 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.485 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.485 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.485 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:08.485 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.485 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.485 [2024-09-29 21:44:27.296397] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: 
spare 00:13:08.486 [2024-09-29 21:44:27.296467] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:08.486 [2024-09-29 21:44:27.397307] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:08.486 [2024-09-29 21:44:27.405064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.486 [2024-09-29 21:44:27.405108] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:08.486 [2024-09-29 21:44:27.405125] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:08.486 [2024-09-29 21:44:27.453148] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:08.486 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.486 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:08.486 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.486 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.486 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.486 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.486 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:08.746 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.746 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.746 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.746 21:44:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.746 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.746 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.746 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.746 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.746 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.746 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.746 "name": "raid_bdev1", 00:13:08.746 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:08.746 "strip_size_kb": 0, 00:13:08.746 "state": "online", 00:13:08.746 "raid_level": "raid1", 00:13:08.746 "superblock": true, 00:13:08.746 "num_base_bdevs": 2, 00:13:08.746 "num_base_bdevs_discovered": 1, 00:13:08.746 "num_base_bdevs_operational": 1, 00:13:08.746 "base_bdevs_list": [ 00:13:08.746 { 00:13:08.746 "name": null, 00:13:08.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.746 "is_configured": false, 00:13:08.746 "data_offset": 0, 00:13:08.746 "data_size": 63488 00:13:08.746 }, 00:13:08.746 { 00:13:08.746 "name": "BaseBdev2", 00:13:08.746 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:08.746 "is_configured": true, 00:13:08.746 "data_offset": 2048, 00:13:08.746 "data_size": 63488 00:13:08.746 } 00:13:08.746 ] 00:13:08.746 }' 00:13:08.746 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.746 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.006 182.50 IOPS, 547.50 MiB/s 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.006 21:44:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.006 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.006 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.006 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.006 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.006 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.006 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.006 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.006 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.266 21:44:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.266 "name": "raid_bdev1", 00:13:09.266 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:09.266 "strip_size_kb": 0, 00:13:09.266 "state": "online", 00:13:09.266 "raid_level": "raid1", 00:13:09.266 "superblock": true, 00:13:09.266 "num_base_bdevs": 2, 00:13:09.266 "num_base_bdevs_discovered": 1, 00:13:09.266 "num_base_bdevs_operational": 1, 00:13:09.266 "base_bdevs_list": [ 00:13:09.266 { 00:13:09.266 "name": null, 00:13:09.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.266 "is_configured": false, 00:13:09.266 "data_offset": 0, 00:13:09.266 "data_size": 63488 00:13:09.266 }, 00:13:09.266 { 00:13:09.266 "name": "BaseBdev2", 00:13:09.266 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:09.266 "is_configured": true, 00:13:09.266 "data_offset": 2048, 00:13:09.266 "data_size": 63488 00:13:09.266 } 00:13:09.266 ] 00:13:09.266 }' 00:13:09.266 21:44:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.266 21:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.266 21:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.266 21:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.266 21:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:09.266 21:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.266 21:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.266 [2024-09-29 21:44:28.088564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.266 21:44:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.266 21:44:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:09.266 [2024-09-29 21:44:28.151247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:09.266 [2024-09-29 21:44:28.153145] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:09.526 [2024-09-29 21:44:28.253639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:09.526 [2024-09-29 21:44:28.254133] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:09.526 [2024-09-29 21:44:28.374170] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:09.526 [2024-09-29 21:44:28.374507] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:13:09.787 [2024-09-29 21:44:28.603358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:09.787 [2024-09-29 21:44:28.603834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:10.046 180.67 IOPS, 542.00 MiB/s [2024-09-29 21:44:28.812289] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:10.046 [2024-09-29 21:44:28.812617] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.306 "name": "raid_bdev1", 00:13:10.306 "uuid": 
"b411d499-3c80-4162-8e81-23447d904824", 00:13:10.306 "strip_size_kb": 0, 00:13:10.306 "state": "online", 00:13:10.306 "raid_level": "raid1", 00:13:10.306 "superblock": true, 00:13:10.306 "num_base_bdevs": 2, 00:13:10.306 "num_base_bdevs_discovered": 2, 00:13:10.306 "num_base_bdevs_operational": 2, 00:13:10.306 "process": { 00:13:10.306 "type": "rebuild", 00:13:10.306 "target": "spare", 00:13:10.306 "progress": { 00:13:10.306 "blocks": 14336, 00:13:10.306 "percent": 22 00:13:10.306 } 00:13:10.306 }, 00:13:10.306 "base_bdevs_list": [ 00:13:10.306 { 00:13:10.306 "name": "spare", 00:13:10.306 "uuid": "c8eb79a3-dacf-5a27-9f97-8c1a00db0993", 00:13:10.306 "is_configured": true, 00:13:10.306 "data_offset": 2048, 00:13:10.306 "data_size": 63488 00:13:10.306 }, 00:13:10.306 { 00:13:10.306 "name": "BaseBdev2", 00:13:10.306 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:10.306 "is_configured": true, 00:13:10.306 "data_offset": 2048, 00:13:10.306 "data_size": 63488 00:13:10.306 } 00:13:10.306 ] 00:13:10.306 }' 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.306 [2024-09-29 21:44:29.248811] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:10.306 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:10.306 21:44:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=425 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.306 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.565 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.565 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.565 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.565 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.565 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.565 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.565 "name": "raid_bdev1", 00:13:10.565 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:10.565 "strip_size_kb": 0, 00:13:10.565 "state": "online", 00:13:10.565 "raid_level": "raid1", 00:13:10.565 "superblock": true, 
00:13:10.565 "num_base_bdevs": 2, 00:13:10.566 "num_base_bdevs_discovered": 2, 00:13:10.566 "num_base_bdevs_operational": 2, 00:13:10.566 "process": { 00:13:10.566 "type": "rebuild", 00:13:10.566 "target": "spare", 00:13:10.566 "progress": { 00:13:10.566 "blocks": 16384, 00:13:10.566 "percent": 25 00:13:10.566 } 00:13:10.566 }, 00:13:10.566 "base_bdevs_list": [ 00:13:10.566 { 00:13:10.566 "name": "spare", 00:13:10.566 "uuid": "c8eb79a3-dacf-5a27-9f97-8c1a00db0993", 00:13:10.566 "is_configured": true, 00:13:10.566 "data_offset": 2048, 00:13:10.566 "data_size": 63488 00:13:10.566 }, 00:13:10.566 { 00:13:10.566 "name": "BaseBdev2", 00:13:10.566 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:10.566 "is_configured": true, 00:13:10.566 "data_offset": 2048, 00:13:10.566 "data_size": 63488 00:13:10.566 } 00:13:10.566 ] 00:13:10.566 }' 00:13:10.566 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.566 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.566 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.566 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.566 21:44:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:10.566 [2024-09-29 21:44:29.502693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:10.825 [2024-09-29 21:44:29.623143] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:11.085 154.00 IOPS, 462.00 MiB/s [2024-09-29 21:44:29.955998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:11.654 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:11.654 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.654 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.654 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.654 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.654 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.654 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.654 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.655 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.655 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.655 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.655 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.655 "name": "raid_bdev1", 00:13:11.655 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:11.655 "strip_size_kb": 0, 00:13:11.655 "state": "online", 00:13:11.655 "raid_level": "raid1", 00:13:11.655 "superblock": true, 00:13:11.655 "num_base_bdevs": 2, 00:13:11.655 "num_base_bdevs_discovered": 2, 00:13:11.655 "num_base_bdevs_operational": 2, 00:13:11.655 "process": { 00:13:11.655 "type": "rebuild", 00:13:11.655 "target": "spare", 00:13:11.655 "progress": { 00:13:11.655 "blocks": 36864, 00:13:11.655 "percent": 58 00:13:11.655 } 00:13:11.655 }, 00:13:11.655 "base_bdevs_list": [ 00:13:11.655 { 00:13:11.655 "name": "spare", 00:13:11.655 "uuid": "c8eb79a3-dacf-5a27-9f97-8c1a00db0993", 
00:13:11.655 "is_configured": true, 00:13:11.655 "data_offset": 2048, 00:13:11.655 "data_size": 63488 00:13:11.655 }, 00:13:11.655 { 00:13:11.655 "name": "BaseBdev2", 00:13:11.655 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:11.655 "is_configured": true, 00:13:11.655 "data_offset": 2048, 00:13:11.655 "data_size": 63488 00:13:11.655 } 00:13:11.655 ] 00:13:11.655 }' 00:13:11.655 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.655 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.655 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.655 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.655 21:44:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:12.174 132.60 IOPS, 397.80 MiB/s [2024-09-29 21:44:30.935279] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:12.174 [2024-09-29 21:44:31.155536] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.743 
21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.743 [2024-09-29 21:44:31.572333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:12.743 [2024-09-29 21:44:31.572702] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.743 "name": "raid_bdev1", 00:13:12.743 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:12.743 "strip_size_kb": 0, 00:13:12.743 "state": "online", 00:13:12.743 "raid_level": "raid1", 00:13:12.743 "superblock": true, 00:13:12.743 "num_base_bdevs": 2, 00:13:12.743 "num_base_bdevs_discovered": 2, 00:13:12.743 "num_base_bdevs_operational": 2, 00:13:12.743 "process": { 00:13:12.743 "type": "rebuild", 00:13:12.743 "target": "spare", 00:13:12.743 "progress": { 00:13:12.743 "blocks": 59392, 00:13:12.743 "percent": 93 00:13:12.743 } 00:13:12.743 }, 00:13:12.743 "base_bdevs_list": [ 00:13:12.743 { 00:13:12.743 "name": "spare", 00:13:12.743 "uuid": "c8eb79a3-dacf-5a27-9f97-8c1a00db0993", 00:13:12.743 "is_configured": true, 00:13:12.743 "data_offset": 2048, 00:13:12.743 "data_size": 63488 00:13:12.743 }, 00:13:12.743 { 00:13:12.743 "name": "BaseBdev2", 00:13:12.743 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:12.743 "is_configured": true, 00:13:12.743 "data_offset": 2048, 00:13:12.743 
"data_size": 63488 00:13:12.743 } 00:13:12.743 ] 00:13:12.743 }' 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.743 21:44:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:13.003 118.67 IOPS, 356.00 MiB/s [2024-09-29 21:44:31.809494] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:13.003 [2024-09-29 21:44:31.914673] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:13.003 [2024-09-29 21:44:31.916852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.941 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:13.941 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.941 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.941 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.941 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.941 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.941 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.941 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.941 21:44:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.941 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.941 106.29 IOPS, 318.86 MiB/s 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.941 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.941 "name": "raid_bdev1", 00:13:13.941 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:13.941 "strip_size_kb": 0, 00:13:13.941 "state": "online", 00:13:13.941 "raid_level": "raid1", 00:13:13.941 "superblock": true, 00:13:13.941 "num_base_bdevs": 2, 00:13:13.941 "num_base_bdevs_discovered": 2, 00:13:13.941 "num_base_bdevs_operational": 2, 00:13:13.941 "base_bdevs_list": [ 00:13:13.941 { 00:13:13.941 "name": "spare", 00:13:13.941 "uuid": "c8eb79a3-dacf-5a27-9f97-8c1a00db0993", 00:13:13.941 "is_configured": true, 00:13:13.941 "data_offset": 2048, 00:13:13.941 "data_size": 63488 00:13:13.941 }, 00:13:13.941 { 00:13:13.941 "name": "BaseBdev2", 00:13:13.941 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:13.941 "is_configured": true, 00:13:13.941 "data_offset": 2048, 00:13:13.941 "data_size": 63488 00:13:13.941 } 00:13:13.941 ] 00:13:13.941 }' 00:13:13.941 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:13.942 
21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.942 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.942 "name": "raid_bdev1", 00:13:13.942 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:13.942 "strip_size_kb": 0, 00:13:13.942 "state": "online", 00:13:13.942 "raid_level": "raid1", 00:13:13.942 "superblock": true, 00:13:13.942 "num_base_bdevs": 2, 00:13:13.942 "num_base_bdevs_discovered": 2, 00:13:13.942 "num_base_bdevs_operational": 2, 00:13:13.942 "base_bdevs_list": [ 00:13:13.942 { 00:13:13.942 "name": "spare", 00:13:13.942 "uuid": "c8eb79a3-dacf-5a27-9f97-8c1a00db0993", 00:13:13.942 "is_configured": true, 00:13:13.942 "data_offset": 2048, 00:13:13.942 "data_size": 63488 00:13:13.942 }, 00:13:13.942 { 00:13:13.942 "name": "BaseBdev2", 00:13:13.942 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:13.942 "is_configured": true, 00:13:13.942 "data_offset": 2048, 00:13:13.942 "data_size": 63488 00:13:13.942 } 00:13:13.942 ] 00:13:13.942 }' 00:13:13.942 21:44:32 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.202 21:44:32 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:14.202 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.202 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.202 "name": "raid_bdev1", 00:13:14.202 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:14.202 "strip_size_kb": 0, 00:13:14.202 "state": "online", 00:13:14.202 "raid_level": "raid1", 00:13:14.202 "superblock": true, 00:13:14.202 "num_base_bdevs": 2, 00:13:14.202 "num_base_bdevs_discovered": 2, 00:13:14.202 "num_base_bdevs_operational": 2, 00:13:14.202 "base_bdevs_list": [ 00:13:14.202 { 00:13:14.202 "name": "spare", 00:13:14.202 "uuid": "c8eb79a3-dacf-5a27-9f97-8c1a00db0993", 00:13:14.202 "is_configured": true, 00:13:14.202 "data_offset": 2048, 00:13:14.202 "data_size": 63488 00:13:14.202 }, 00:13:14.202 { 00:13:14.202 "name": "BaseBdev2", 00:13:14.202 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:14.202 "is_configured": true, 00:13:14.202 "data_offset": 2048, 00:13:14.202 "data_size": 63488 00:13:14.202 } 00:13:14.202 ] 00:13:14.202 }' 00:13:14.202 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.202 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.462 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:14.462 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.462 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.462 [2024-09-29 21:44:33.405831] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:14.462 [2024-09-29 21:44:33.405867] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.722 00:13:14.722 Latency(us) 00:13:14.722 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:13:14.722 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:14.722 raid_bdev1 : 7.78 98.85 296.55 0.00 0.00 13728.85 291.55 112641.79 00:13:14.722 =================================================================================================================== 00:13:14.722 Total : 98.85 296.55 0.00 0.00 13728.85 291.55 112641.79 00:13:14.722 [2024-09-29 21:44:33.524904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.722 [2024-09-29 21:44:33.524949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.722 [2024-09-29 21:44:33.525022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.722 [2024-09-29 21:44:33.525058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:14.722 { 00:13:14.722 "results": [ 00:13:14.722 { 00:13:14.722 "job": "raid_bdev1", 00:13:14.722 "core_mask": "0x1", 00:13:14.722 "workload": "randrw", 00:13:14.722 "percentage": 50, 00:13:14.722 "status": "finished", 00:13:14.722 "queue_depth": 2, 00:13:14.722 "io_size": 3145728, 00:13:14.722 "runtime": 7.779481, 00:13:14.722 "iops": 98.8497818813363, 00:13:14.722 "mibps": 296.5493456440089, 00:13:14.722 "io_failed": 0, 00:13:14.722 "io_timeout": 0, 00:13:14.722 "avg_latency_us": 13728.848785640059, 00:13:14.722 "min_latency_us": 291.54934497816595, 00:13:14.722 "max_latency_us": 112641.78864628822 00:13:14.722 } 00:13:14.722 ], 00:13:14.722 "core_count": 1 00:13:14.722 } 00:13:14.722 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.722 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.722 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.722 21:44:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.722 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:14.722 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.722 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:14.722 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:14.722 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:14.722 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:14.722 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:14.722 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:14.723 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:14.723 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:14.723 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:14.723 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:14.723 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:14.723 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:14.723 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:14.983 /dev/nbd0 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:14.983 21:44:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.983 1+0 records in 00:13:14.983 1+0 records out 00:13:14.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461817 s, 8.9 MB/s 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:14.983 
21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:14.983 21:44:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:15.243 /dev/nbd1 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.243 1+0 records in 00:13:15.243 1+0 records out 00:13:15.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553403 s, 7.4 MB/s 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:15.243 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:15.503 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:15.503 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:13:15.503 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:15.503 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:15.503 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:15.503 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.503 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:15.763 21:44:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.763 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:16.023 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.023 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.023 
[2024-09-29 21:44:34.753321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:16.024 [2024-09-29 21:44:34.753480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.024 [2024-09-29 21:44:34.753533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:16.024 [2024-09-29 21:44:34.753573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.024 [2024-09-29 21:44:34.755887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.024 [2024-09-29 21:44:34.755975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:16.024 [2024-09-29 21:44:34.756118] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:16.024 [2024-09-29 21:44:34.756228] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.024 [2024-09-29 21:44:34.756420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.024 spare 00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.024 [2024-09-29 21:44:34.856367] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:16.024 [2024-09-29 21:44:34.856452] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:16.024 [2024-09-29 21:44:34.856770] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:16.024 [2024-09-29 21:44:34.856999] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:13:16.024 [2024-09-29 21:44:34.857065] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:13:16.024 [2024-09-29 21:44:34.857339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:16.024 21:44:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:16.024 "name": "raid_bdev1",
00:13:16.024 "uuid": "b411d499-3c80-4162-8e81-23447d904824",
00:13:16.024 "strip_size_kb": 0,
00:13:16.024 "state": "online",
00:13:16.024 "raid_level": "raid1",
00:13:16.024 "superblock": true,
00:13:16.024 "num_base_bdevs": 2,
00:13:16.024 "num_base_bdevs_discovered": 2,
00:13:16.024 "num_base_bdevs_operational": 2,
00:13:16.024 "base_bdevs_list": [
00:13:16.024 {
00:13:16.024 "name": "spare",
00:13:16.024 "uuid": "c8eb79a3-dacf-5a27-9f97-8c1a00db0993",
00:13:16.024 "is_configured": true,
00:13:16.024 "data_offset": 2048,
00:13:16.024 "data_size": 63488
00:13:16.024 },
00:13:16.024 {
00:13:16.024 "name": "BaseBdev2",
00:13:16.024 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314",
00:13:16.024 "is_configured": true,
00:13:16.024 "data_offset": 2048,
00:13:16.024 "data_size": 63488
00:13:16.024 }
00:13:16.024 ]
00:13:16.024 }'
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:16.024 21:44:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:16.594 "name": "raid_bdev1",
00:13:16.594 "uuid": "b411d499-3c80-4162-8e81-23447d904824",
00:13:16.594 "strip_size_kb": 0,
00:13:16.594 "state": "online",
00:13:16.594 "raid_level": "raid1",
00:13:16.594 "superblock": true,
00:13:16.594 "num_base_bdevs": 2,
00:13:16.594 "num_base_bdevs_discovered": 2,
00:13:16.594 "num_base_bdevs_operational": 2,
00:13:16.594 "base_bdevs_list": [
00:13:16.594 {
00:13:16.594 "name": "spare",
00:13:16.594 "uuid": "c8eb79a3-dacf-5a27-9f97-8c1a00db0993",
00:13:16.594 "is_configured": true,
00:13:16.594 "data_offset": 2048,
00:13:16.594 "data_size": 63488
00:13:16.594 },
00:13:16.594 {
00:13:16.594 "name": "BaseBdev2",
00:13:16.594 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314",
00:13:16.594 "is_configured": true,
00:13:16.594 "data_offset": 2048,
00:13:16.594 "data_size": 63488
00:13:16.594 }
00:13:16.594 ]
00:13:16.594 }'
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:16.594 [2024-09-29 21:44:35.464403] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:16.594 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:16.595 21:44:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:16.595 "name": "raid_bdev1",
00:13:16.595 "uuid": "b411d499-3c80-4162-8e81-23447d904824",
00:13:16.595 "strip_size_kb": 0,
00:13:16.595 "state": "online",
00:13:16.595 "raid_level": "raid1",
00:13:16.595 "superblock": true,
00:13:16.595 "num_base_bdevs": 2,
00:13:16.595 "num_base_bdevs_discovered": 1,
00:13:16.595 "num_base_bdevs_operational": 1,
00:13:16.595 "base_bdevs_list": [
00:13:16.595 {
00:13:16.595 "name": null,
00:13:16.595 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:16.595 "is_configured": false,
00:13:16.595 "data_offset": 0,
00:13:16.595 "data_size": 63488
00:13:16.595 },
00:13:16.595 {
00:13:16.595 "name": "BaseBdev2",
00:13:16.595 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314",
00:13:16.595 "is_configured": true,
00:13:16.595 "data_offset": 2048,
00:13:16.595 "data_size": 63488
00:13:16.595 }
00:13:16.595 ]
00:13:16.595 }'
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:16.595 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x
00:13:17.165 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:17.165 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:17.165 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:17.165 [2024-09-29 21:44:35.916000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:17.165 [2024-09-29 21:44:35.916251] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:13:17.165 [2024-09-29 21:44:35.916271] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:13:17.165 [2024-09-29 21:44:35.916327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:17.165 [2024-09-29 21:44:35.932160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0
00:13:17.165 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:17.165 21:44:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:13:17.165 [2024-09-29 21:44:35.934005] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:18.102 21:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:18.102 21:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:18.102 21:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:18.102 21:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:18.102 21:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:18.102 21:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:18.102 21:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:18.102 21:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:18.102 21:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:18.102 21:44:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:18.102 21:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:18.102 "name": "raid_bdev1",
00:13:18.102 "uuid": "b411d499-3c80-4162-8e81-23447d904824",
00:13:18.102 "strip_size_kb": 0,
00:13:18.102 "state": "online",
00:13:18.102 "raid_level": "raid1",
00:13:18.102 "superblock": true,
00:13:18.102 "num_base_bdevs": 2,
00:13:18.102 "num_base_bdevs_discovered": 2,
00:13:18.102 "num_base_bdevs_operational": 2,
00:13:18.102 "process": {
00:13:18.102 "type": "rebuild",
00:13:18.102 "target": "spare",
00:13:18.102 "progress": {
00:13:18.102 "blocks": 20480,
00:13:18.102 "percent": 32
00:13:18.102 }
00:13:18.102 },
00:13:18.102 "base_bdevs_list": [
00:13:18.102 {
00:13:18.102 "name": "spare",
00:13:18.102 "uuid": "c8eb79a3-dacf-5a27-9f97-8c1a00db0993",
00:13:18.102 "is_configured": true,
00:13:18.102 "data_offset": 2048,
00:13:18.102 "data_size": 63488
00:13:18.102 },
00:13:18.102 {
00:13:18.102 "name": "BaseBdev2",
00:13:18.102 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314",
00:13:18.102 "is_configured": true,
00:13:18.102 "data_offset": 2048,
00:13:18.102 "data_size": 63488
00:13:18.102 }
00:13:18.102 ]
00:13:18.102 }'
00:13:18.102 21:44:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:18.102 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:18.102 
21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:18.362 [2024-09-29 21:44:37.097380] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:18.362 [2024-09-29 21:44:37.139304] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:18.362 [2024-09-29 21:44:37.139377] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:18.362 [2024-09-29 21:44:37.139393] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:18.362 [2024-09-29 21:44:37.139404] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:18.362 "name": "raid_bdev1",
00:13:18.362 "uuid": "b411d499-3c80-4162-8e81-23447d904824",
00:13:18.362 "strip_size_kb": 0,
00:13:18.362 "state": "online",
00:13:18.362 "raid_level": "raid1",
00:13:18.362 "superblock": true,
00:13:18.362 "num_base_bdevs": 2,
00:13:18.362 "num_base_bdevs_discovered": 1,
00:13:18.362 "num_base_bdevs_operational": 1,
00:13:18.362 "base_bdevs_list": [
00:13:18.362 {
00:13:18.362 "name": null,
00:13:18.362 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:18.362 "is_configured": false,
00:13:18.362 "data_offset": 0,
00:13:18.362 "data_size": 63488
00:13:18.362 },
00:13:18.362 {
00:13:18.362 "name": "BaseBdev2",
00:13:18.362 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314",
00:13:18.362 "is_configured": true,
00:13:18.362 "data_offset": 2048,
00:13:18.362 "data_size": 63488
00:13:18.362 }
00:13:18.362 ]
00:13:18.362 }'
00:13:18.362 21:44:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:18.362 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:18.622 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:18.622 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:18.622 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:18.622 [2024-09-29 21:44:37.593172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:18.622 [2024-09-29 21:44:37.593298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:18.622 [2024-09-29 21:44:37.593343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:13:18.622 [2024-09-29 21:44:37.593379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:18.622 [2024-09-29 21:44:37.593922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:18.622 [2024-09-29 21:44:37.593995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:18.622 [2024-09-29 21:44:37.594151] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:13:18.622 [2024-09-29 21:44:37.594207] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:13:18.622 [2024-09-29 21:44:37.594261] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:18.622 [2024-09-29 21:44:37.594327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:18.881 [2024-09-29 21:44:37.608922] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270
00:13:18.881 spare
00:13:18.881 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:18.881 [2024-09-29 21:44:37.610685] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:18.881 21:44:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1
00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:19.819 "name": "raid_bdev1",
00:13:19.819 "uuid": "b411d499-3c80-4162-8e81-23447d904824",
00:13:19.819 "strip_size_kb": 0,
00:13:19.819 
"state": "online", 00:13:19.819 "raid_level": "raid1", 00:13:19.819 "superblock": true, 00:13:19.819 "num_base_bdevs": 2, 00:13:19.819 "num_base_bdevs_discovered": 2, 00:13:19.819 "num_base_bdevs_operational": 2, 00:13:19.819 "process": { 00:13:19.819 "type": "rebuild", 00:13:19.819 "target": "spare", 00:13:19.819 "progress": { 00:13:19.819 "blocks": 20480, 00:13:19.819 "percent": 32 00:13:19.819 } 00:13:19.819 }, 00:13:19.819 "base_bdevs_list": [ 00:13:19.819 { 00:13:19.819 "name": "spare", 00:13:19.819 "uuid": "c8eb79a3-dacf-5a27-9f97-8c1a00db0993", 00:13:19.819 "is_configured": true, 00:13:19.819 "data_offset": 2048, 00:13:19.819 "data_size": 63488 00:13:19.819 }, 00:13:19.819 { 00:13:19.819 "name": "BaseBdev2", 00:13:19.819 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:19.819 "is_configured": true, 00:13:19.819 "data_offset": 2048, 00:13:19.819 "data_size": 63488 00:13:19.819 } 00:13:19.819 ] 00:13:19.819 }' 00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.819 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.819 [2024-09-29 21:44:38.774760] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.078 [2024-09-29 21:44:38.815689] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:20.078 [2024-09-29 21:44:38.815759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:20.079 [2024-09-29 21:44:38.815780] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:20.079 [2024-09-29 21:44:38.815789] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:20.079 "name": "raid_bdev1",
00:13:20.079 "uuid": "b411d499-3c80-4162-8e81-23447d904824",
00:13:20.079 "strip_size_kb": 0,
00:13:20.079 "state": "online",
00:13:20.079 "raid_level": "raid1",
00:13:20.079 "superblock": true,
00:13:20.079 "num_base_bdevs": 2,
00:13:20.079 "num_base_bdevs_discovered": 1,
00:13:20.079 "num_base_bdevs_operational": 1,
00:13:20.079 "base_bdevs_list": [
00:13:20.079 {
00:13:20.079 "name": null,
00:13:20.079 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:20.079 "is_configured": false,
00:13:20.079 "data_offset": 0,
00:13:20.079 "data_size": 63488
00:13:20.079 },
00:13:20.079 {
00:13:20.079 "name": "BaseBdev2",
00:13:20.079 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314",
00:13:20.079 "is_configured": true,
00:13:20.079 "data_offset": 2048,
00:13:20.079 "data_size": 63488
00:13:20.079 }
00:13:20.079 ]
00:13:20.079 }'
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:20.079 21:44:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.337 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:20.337 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:20.337 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:20.337 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:20.337 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:20.596 "name": "raid_bdev1",
00:13:20.596 "uuid": "b411d499-3c80-4162-8e81-23447d904824",
00:13:20.596 "strip_size_kb": 0,
00:13:20.596 "state": "online",
00:13:20.596 "raid_level": "raid1",
00:13:20.596 "superblock": true,
00:13:20.596 "num_base_bdevs": 2,
00:13:20.596 "num_base_bdevs_discovered": 1,
00:13:20.596 "num_base_bdevs_operational": 1,
00:13:20.596 "base_bdevs_list": [
00:13:20.596 {
00:13:20.596 "name": null,
00:13:20.596 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:20.596 "is_configured": false,
00:13:20.596 "data_offset": 0,
00:13:20.596 "data_size": 63488
00:13:20.596 },
00:13:20.596 {
00:13:20.596 "name": "BaseBdev2",
00:13:20.596 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314",
00:13:20.596 "is_configured": true,
00:13:20.596 "data_offset": 2048,
00:13:20.596 "data_size": 63488
00:13:20.596 }
00:13:20.596 ]
00:13:20.596 }'
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.596 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:20.597 [2024-09-29 21:44:39.476669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:20.597 [2024-09-29 21:44:39.476778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:20.597 [2024-09-29 21:44:39.476823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:13:20.597 [2024-09-29 21:44:39.476855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:20.597 [2024-09-29 21:44:39.477334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:20.597 [2024-09-29 21:44:39.477399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:20.597 [2024-09-29 21:44:39.477516] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:13:20.597 [2024-09-29 21:44:39.477563] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:13:20.597 [2024-09-29 21:44:39.477614] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:13:20.597 [2024-09-29 21:44:39.477685] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:13:20.597 BaseBdev1
00:13:20.597 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.597 21:44:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:13:21.534 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.794 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.794 "name": "raid_bdev1", 00:13:21.794 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:21.794 "strip_size_kb": 0, 00:13:21.794 "state": "online", 00:13:21.794 "raid_level": "raid1", 00:13:21.794 "superblock": true, 00:13:21.794 "num_base_bdevs": 2, 00:13:21.794 "num_base_bdevs_discovered": 1, 00:13:21.794 "num_base_bdevs_operational": 1, 00:13:21.794 "base_bdevs_list": [ 00:13:21.794 { 00:13:21.794 "name": null, 00:13:21.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.794 "is_configured": false, 00:13:21.794 "data_offset": 0, 00:13:21.794 "data_size": 63488 00:13:21.794 }, 00:13:21.794 { 00:13:21.794 "name": "BaseBdev2", 00:13:21.794 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:21.794 "is_configured": true, 00:13:21.794 "data_offset": 2048, 00:13:21.794 "data_size": 63488 00:13:21.794 } 00:13:21.794 ] 00:13:21.794 }' 00:13:21.794 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.794 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.054 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.054 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.054 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.054 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.054 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.054 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.054 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:13:22.054 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.054 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.054 21:44:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.054 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.054 "name": "raid_bdev1", 00:13:22.054 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:22.054 "strip_size_kb": 0, 00:13:22.054 "state": "online", 00:13:22.054 "raid_level": "raid1", 00:13:22.054 "superblock": true, 00:13:22.054 "num_base_bdevs": 2, 00:13:22.054 "num_base_bdevs_discovered": 1, 00:13:22.054 "num_base_bdevs_operational": 1, 00:13:22.054 "base_bdevs_list": [ 00:13:22.054 { 00:13:22.054 "name": null, 00:13:22.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.054 "is_configured": false, 00:13:22.054 "data_offset": 0, 00:13:22.054 "data_size": 63488 00:13:22.054 }, 00:13:22.054 { 00:13:22.054 "name": "BaseBdev2", 00:13:22.054 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:22.054 "is_configured": true, 00:13:22.054 "data_offset": 2048, 00:13:22.054 "data_size": 63488 00:13:22.054 } 00:13:22.054 ] 00:13:22.054 }' 00:13:22.054 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.314 [2024-09-29 21:44:41.122513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.314 [2024-09-29 21:44:41.122702] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:22.314 [2024-09-29 21:44:41.122720] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:22.314 request: 00:13:22.314 { 00:13:22.314 "base_bdev": "BaseBdev1", 00:13:22.314 "raid_bdev": "raid_bdev1", 00:13:22.314 "method": "bdev_raid_add_base_bdev", 00:13:22.314 "req_id": 1 00:13:22.314 } 00:13:22.314 Got JSON-RPC error response 00:13:22.314 response: 00:13:22.314 { 00:13:22.314 "code": -22, 00:13:22.314 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:22.314 } 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:22.314 21:44:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.253 "name": "raid_bdev1", 00:13:23.253 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:23.253 "strip_size_kb": 0, 00:13:23.253 "state": "online", 00:13:23.253 "raid_level": "raid1", 00:13:23.253 "superblock": true, 00:13:23.253 "num_base_bdevs": 2, 00:13:23.253 "num_base_bdevs_discovered": 1, 00:13:23.253 "num_base_bdevs_operational": 1, 00:13:23.253 "base_bdevs_list": [ 00:13:23.253 { 00:13:23.253 "name": null, 00:13:23.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.253 "is_configured": false, 00:13:23.253 "data_offset": 0, 00:13:23.253 "data_size": 63488 00:13:23.253 }, 00:13:23.253 { 00:13:23.253 "name": "BaseBdev2", 00:13:23.253 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:23.253 "is_configured": true, 00:13:23.253 "data_offset": 2048, 00:13:23.253 "data_size": 63488 00:13:23.253 } 00:13:23.253 ] 00:13:23.253 }' 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.253 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.822 21:44:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.822 "name": "raid_bdev1", 00:13:23.822 "uuid": "b411d499-3c80-4162-8e81-23447d904824", 00:13:23.822 "strip_size_kb": 0, 00:13:23.822 "state": "online", 00:13:23.822 "raid_level": "raid1", 00:13:23.822 "superblock": true, 00:13:23.822 "num_base_bdevs": 2, 00:13:23.822 "num_base_bdevs_discovered": 1, 00:13:23.822 "num_base_bdevs_operational": 1, 00:13:23.822 "base_bdevs_list": [ 00:13:23.822 { 00:13:23.822 "name": null, 00:13:23.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.822 "is_configured": false, 00:13:23.822 "data_offset": 0, 00:13:23.822 "data_size": 63488 00:13:23.822 }, 00:13:23.822 { 00:13:23.822 "name": "BaseBdev2", 00:13:23.822 "uuid": "58070723-eb70-5e7f-82a5-01774f1ca314", 00:13:23.822 "is_configured": true, 00:13:23.822 "data_offset": 2048, 00:13:23.822 "data_size": 63488 00:13:23.822 } 00:13:23.822 ] 00:13:23.822 }' 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.822 21:44:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76914 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 76914 ']' 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 76914 00:13:23.822 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:13:23.823 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:23.823 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76914 00:13:24.082 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:24.082 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:24.082 killing process with pid 76914 00:13:24.082 Received shutdown signal, test time was about 17.111347 seconds 00:13:24.082 00:13:24.082 Latency(us) 00:13:24.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.082 =================================================================================================================== 00:13:24.082 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:24.082 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76914' 00:13:24.082 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 76914 00:13:24.082 [2024-09-29 21:44:42.819385] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:24.082 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 76914 00:13:24.082 [2024-09-29 21:44:42.819531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.082 [2024-09-29 21:44:42.819592] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:13:24.082 [2024-09-29 21:44:42.819607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:24.082 [2024-09-29 21:44:43.041423] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:25.465 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:25.465 00:13:25.465 real 0m20.439s 00:13:25.465 user 0m26.621s 00:13:25.465 sys 0m2.351s 00:13:25.465 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:25.465 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.465 ************************************ 00:13:25.465 END TEST raid_rebuild_test_sb_io 00:13:25.465 ************************************ 00:13:25.465 21:44:44 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:25.465 21:44:44 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:25.465 21:44:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:25.465 21:44:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.465 21:44:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:25.465 ************************************ 00:13:25.465 START TEST raid_rebuild_test 00:13:25.465 ************************************ 00:13:25.465 21:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:13:25.465 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:25.465 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:25.465 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:25.465 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:25.465 21:44:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:25.465 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:25.465 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.465 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:25.726 
21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77608 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77608 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 77608 ']' 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:25.726 21:44:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.726 [2024-09-29 21:44:44.547870] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:13:25.726 [2024-09-29 21:44:44.548093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77608 ] 00:13:25.726 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:25.726 Zero copy mechanism will not be used. 00:13:25.986 [2024-09-29 21:44:44.710456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.986 [2024-09-29 21:44:44.964815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.246 [2024-09-29 21:44:45.191642] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.246 [2024-09-29 21:44:45.191754] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.507 BaseBdev1_malloc 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:26.507 [2024-09-29 21:44:45.427804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:26.507 [2024-09-29 21:44:45.427878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.507 [2024-09-29 21:44:45.427901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:26.507 [2024-09-29 21:44:45.427918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.507 [2024-09-29 21:44:45.430380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.507 [2024-09-29 21:44:45.430419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:26.507 BaseBdev1 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.507 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.768 BaseBdev2_malloc 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.768 [2024-09-29 21:44:45.515160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:26.768 [2024-09-29 21:44:45.515225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:26.768 [2024-09-29 21:44:45.515247] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:26.768 [2024-09-29 21:44:45.515261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.768 [2024-09-29 21:44:45.517677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.768 [2024-09-29 21:44:45.517718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:26.768 BaseBdev2 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.768 BaseBdev3_malloc 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.768 [2024-09-29 21:44:45.575727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:26.768 [2024-09-29 21:44:45.575854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.768 [2024-09-29 21:44:45.575895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:26.768 [2024-09-29 21:44:45.575926] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.768 [2024-09-29 21:44:45.578279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.768 [2024-09-29 21:44:45.578370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:26.768 BaseBdev3 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.768 BaseBdev4_malloc 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.768 [2024-09-29 21:44:45.635962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:26.768 [2024-09-29 21:44:45.636094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.768 [2024-09-29 21:44:45.636132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:26.768 [2024-09-29 21:44:45.636187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.768 [2024-09-29 21:44:45.638491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.768 [2024-09-29 21:44:45.638579] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:26.768 BaseBdev4 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.768 spare_malloc 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.768 spare_delay 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.768 [2024-09-29 21:44:45.707753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:26.768 [2024-09-29 21:44:45.707812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.768 [2024-09-29 21:44:45.707831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:26.768 [2024-09-29 21:44:45.707842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.768 [2024-09-29 
21:44:45.710240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.768 [2024-09-29 21:44:45.710354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:26.768 spare 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.768 [2024-09-29 21:44:45.719792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.768 [2024-09-29 21:44:45.721833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.768 [2024-09-29 21:44:45.721903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.768 [2024-09-29 21:44:45.721957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:26.768 [2024-09-29 21:44:45.722034] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:26.768 [2024-09-29 21:44:45.722056] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:26.768 [2024-09-29 21:44:45.722316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:26.768 [2024-09-29 21:44:45.722480] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:26.768 [2024-09-29 21:44:45.722492] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:26.768 [2024-09-29 21:44:45.722639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.768 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.769 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.769 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.769 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.769 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.769 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.769 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.769 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.029 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.029 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.029 "name": "raid_bdev1", 00:13:27.029 "uuid": "745fe3be-6881-40de-a856-7487a569d0d8", 00:13:27.029 "strip_size_kb": 0, 00:13:27.029 "state": "online", 00:13:27.029 "raid_level": 
"raid1", 00:13:27.029 "superblock": false, 00:13:27.029 "num_base_bdevs": 4, 00:13:27.029 "num_base_bdevs_discovered": 4, 00:13:27.029 "num_base_bdevs_operational": 4, 00:13:27.029 "base_bdevs_list": [ 00:13:27.029 { 00:13:27.029 "name": "BaseBdev1", 00:13:27.029 "uuid": "3b7701c0-3381-5146-898c-8d06ba575729", 00:13:27.029 "is_configured": true, 00:13:27.029 "data_offset": 0, 00:13:27.029 "data_size": 65536 00:13:27.029 }, 00:13:27.029 { 00:13:27.029 "name": "BaseBdev2", 00:13:27.029 "uuid": "d51a09aa-c12f-557a-8e07-7c62a4985fab", 00:13:27.029 "is_configured": true, 00:13:27.029 "data_offset": 0, 00:13:27.029 "data_size": 65536 00:13:27.029 }, 00:13:27.029 { 00:13:27.029 "name": "BaseBdev3", 00:13:27.029 "uuid": "08ef5474-a9df-5d6b-a350-b8eedb05ee19", 00:13:27.029 "is_configured": true, 00:13:27.029 "data_offset": 0, 00:13:27.029 "data_size": 65536 00:13:27.029 }, 00:13:27.029 { 00:13:27.029 "name": "BaseBdev4", 00:13:27.029 "uuid": "4a6f876e-c027-51bf-8bfe-ce2c099940d8", 00:13:27.029 "is_configured": true, 00:13:27.029 "data_offset": 0, 00:13:27.029 "data_size": 65536 00:13:27.029 } 00:13:27.029 ] 00:13:27.029 }' 00:13:27.029 21:44:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.029 21:44:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.289 21:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:27.289 21:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:27.289 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.289 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.289 [2024-09-29 21:44:46.223247] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.289 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.289 21:44:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:27.289 21:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:27.289 21:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.289 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.289 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:27.549 21:44:46 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:27.549 [2024-09-29 21:44:46.486521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:27.549 /dev/nbd0 00:13:27.808 21:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:27.808 21:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:27.808 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:27.808 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:27.808 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.808 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.808 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:27.808 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:27.808 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.808 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.809 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.809 1+0 records in 00:13:27.809 1+0 records out 00:13:27.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401113 s, 10.2 MB/s 00:13:27.809 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.809 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:27.809 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:27.809 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.809 21:44:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:27.809 21:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.809 21:44:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:27.809 21:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:27.809 21:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:27.809 21:44:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:34.397 65536+0 records in 00:13:34.397 65536+0 records out 00:13:34.397 33554432 bytes (34 MB, 32 MiB) copied, 5.5735 s, 6.0 MB/s 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:34.397 [2024-09-29 21:44:52.355346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:34.397 
21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.397 [2024-09-29 21:44:52.371418] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.397 21:44:52 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.397 "name": "raid_bdev1", 00:13:34.397 "uuid": "745fe3be-6881-40de-a856-7487a569d0d8", 00:13:34.397 "strip_size_kb": 0, 00:13:34.397 "state": "online", 00:13:34.397 "raid_level": "raid1", 00:13:34.397 "superblock": false, 00:13:34.397 "num_base_bdevs": 4, 00:13:34.397 "num_base_bdevs_discovered": 3, 00:13:34.397 "num_base_bdevs_operational": 3, 00:13:34.397 "base_bdevs_list": [ 00:13:34.397 { 00:13:34.397 "name": null, 00:13:34.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.397 "is_configured": false, 00:13:34.397 "data_offset": 0, 00:13:34.397 "data_size": 65536 00:13:34.397 }, 00:13:34.397 { 00:13:34.397 "name": "BaseBdev2", 00:13:34.397 "uuid": "d51a09aa-c12f-557a-8e07-7c62a4985fab", 00:13:34.397 "is_configured": true, 00:13:34.397 "data_offset": 0, 00:13:34.397 "data_size": 65536 00:13:34.397 }, 00:13:34.397 { 00:13:34.397 "name": "BaseBdev3", 00:13:34.397 "uuid": "08ef5474-a9df-5d6b-a350-b8eedb05ee19", 00:13:34.397 "is_configured": true, 00:13:34.397 "data_offset": 0, 00:13:34.397 "data_size": 65536 00:13:34.397 }, 00:13:34.397 { 00:13:34.397 "name": "BaseBdev4", 00:13:34.397 "uuid": "4a6f876e-c027-51bf-8bfe-ce2c099940d8", 00:13:34.397 
"is_configured": true, 00:13:34.397 "data_offset": 0, 00:13:34.397 "data_size": 65536 00:13:34.397 } 00:13:34.397 ] 00:13:34.397 }' 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.397 [2024-09-29 21:44:52.862594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.397 [2024-09-29 21:44:52.878551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.397 21:44:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:34.397 [2024-09-29 21:44:52.880512] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:34.967 21:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.967 21:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.967 21:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.967 21:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.967 21:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.967 21:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.967 21:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.967 
21:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.967 21:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.967 21:44:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.967 21:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.967 "name": "raid_bdev1", 00:13:34.967 "uuid": "745fe3be-6881-40de-a856-7487a569d0d8", 00:13:34.967 "strip_size_kb": 0, 00:13:34.967 "state": "online", 00:13:34.967 "raid_level": "raid1", 00:13:34.967 "superblock": false, 00:13:34.967 "num_base_bdevs": 4, 00:13:34.967 "num_base_bdevs_discovered": 4, 00:13:34.967 "num_base_bdevs_operational": 4, 00:13:34.967 "process": { 00:13:34.967 "type": "rebuild", 00:13:34.967 "target": "spare", 00:13:34.967 "progress": { 00:13:34.967 "blocks": 20480, 00:13:34.967 "percent": 31 00:13:34.967 } 00:13:34.967 }, 00:13:34.967 "base_bdevs_list": [ 00:13:34.967 { 00:13:34.967 "name": "spare", 00:13:34.967 "uuid": "d9a81610-86af-5d9d-90c7-0b0d0b249ff3", 00:13:34.967 "is_configured": true, 00:13:34.967 "data_offset": 0, 00:13:34.967 "data_size": 65536 00:13:34.967 }, 00:13:34.967 { 00:13:34.967 "name": "BaseBdev2", 00:13:34.968 "uuid": "d51a09aa-c12f-557a-8e07-7c62a4985fab", 00:13:34.968 "is_configured": true, 00:13:34.968 "data_offset": 0, 00:13:34.968 "data_size": 65536 00:13:34.968 }, 00:13:34.968 { 00:13:34.968 "name": "BaseBdev3", 00:13:34.968 "uuid": "08ef5474-a9df-5d6b-a350-b8eedb05ee19", 00:13:34.968 "is_configured": true, 00:13:34.968 "data_offset": 0, 00:13:34.968 "data_size": 65536 00:13:34.968 }, 00:13:34.968 { 00:13:34.968 "name": "BaseBdev4", 00:13:34.968 "uuid": "4a6f876e-c027-51bf-8bfe-ce2c099940d8", 00:13:34.968 "is_configured": true, 00:13:34.968 "data_offset": 0, 00:13:34.968 "data_size": 65536 00:13:34.968 } 00:13:34.968 ] 00:13:34.968 }' 00:13:34.968 21:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:13:35.227 21:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.227 21:44:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.227 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.227 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:35.227 21:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.227 21:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.227 [2024-09-29 21:44:54.020494] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.227 [2024-09-29 21:44:54.086058] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:35.227 [2024-09-29 21:44:54.086119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.228 [2024-09-29 21:44:54.086135] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.228 [2024-09-29 21:44:54.086143] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.228 21:44:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.228 "name": "raid_bdev1", 00:13:35.228 "uuid": "745fe3be-6881-40de-a856-7487a569d0d8", 00:13:35.228 "strip_size_kb": 0, 00:13:35.228 "state": "online", 00:13:35.228 "raid_level": "raid1", 00:13:35.228 "superblock": false, 00:13:35.228 "num_base_bdevs": 4, 00:13:35.228 "num_base_bdevs_discovered": 3, 00:13:35.228 "num_base_bdevs_operational": 3, 00:13:35.228 "base_bdevs_list": [ 00:13:35.228 { 00:13:35.228 "name": null, 00:13:35.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.228 "is_configured": false, 00:13:35.228 "data_offset": 0, 00:13:35.228 "data_size": 65536 00:13:35.228 }, 00:13:35.228 { 00:13:35.228 "name": "BaseBdev2", 00:13:35.228 "uuid": "d51a09aa-c12f-557a-8e07-7c62a4985fab", 00:13:35.228 "is_configured": true, 00:13:35.228 "data_offset": 0, 00:13:35.228 "data_size": 65536 00:13:35.228 }, 00:13:35.228 { 00:13:35.228 "name": 
"BaseBdev3", 00:13:35.228 "uuid": "08ef5474-a9df-5d6b-a350-b8eedb05ee19", 00:13:35.228 "is_configured": true, 00:13:35.228 "data_offset": 0, 00:13:35.228 "data_size": 65536 00:13:35.228 }, 00:13:35.228 { 00:13:35.228 "name": "BaseBdev4", 00:13:35.228 "uuid": "4a6f876e-c027-51bf-8bfe-ce2c099940d8", 00:13:35.228 "is_configured": true, 00:13:35.228 "data_offset": 0, 00:13:35.228 "data_size": 65536 00:13:35.228 } 00:13:35.228 ] 00:13:35.228 }' 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.228 21:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.798 "name": "raid_bdev1", 00:13:35.798 "uuid": "745fe3be-6881-40de-a856-7487a569d0d8", 00:13:35.798 "strip_size_kb": 0, 00:13:35.798 "state": "online", 00:13:35.798 "raid_level": 
"raid1", 00:13:35.798 "superblock": false, 00:13:35.798 "num_base_bdevs": 4, 00:13:35.798 "num_base_bdevs_discovered": 3, 00:13:35.798 "num_base_bdevs_operational": 3, 00:13:35.798 "base_bdevs_list": [ 00:13:35.798 { 00:13:35.798 "name": null, 00:13:35.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.798 "is_configured": false, 00:13:35.798 "data_offset": 0, 00:13:35.798 "data_size": 65536 00:13:35.798 }, 00:13:35.798 { 00:13:35.798 "name": "BaseBdev2", 00:13:35.798 "uuid": "d51a09aa-c12f-557a-8e07-7c62a4985fab", 00:13:35.798 "is_configured": true, 00:13:35.798 "data_offset": 0, 00:13:35.798 "data_size": 65536 00:13:35.798 }, 00:13:35.798 { 00:13:35.798 "name": "BaseBdev3", 00:13:35.798 "uuid": "08ef5474-a9df-5d6b-a350-b8eedb05ee19", 00:13:35.798 "is_configured": true, 00:13:35.798 "data_offset": 0, 00:13:35.798 "data_size": 65536 00:13:35.798 }, 00:13:35.798 { 00:13:35.798 "name": "BaseBdev4", 00:13:35.798 "uuid": "4a6f876e-c027-51bf-8bfe-ce2c099940d8", 00:13:35.798 "is_configured": true, 00:13:35.798 "data_offset": 0, 00:13:35.798 "data_size": 65536 00:13:35.798 } 00:13:35.798 ] 00:13:35.798 }' 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.798 [2024-09-29 21:44:54.704461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:13:35.798 [2024-09-29 21:44:54.717698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.798 21:44:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:35.798 [2024-09-29 21:44:54.719406] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:37.180 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.180 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.180 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.180 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.180 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.180 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.180 21:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.180 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.180 21:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.180 21:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.180 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.180 "name": "raid_bdev1", 00:13:37.180 "uuid": "745fe3be-6881-40de-a856-7487a569d0d8", 00:13:37.180 "strip_size_kb": 0, 00:13:37.180 "state": "online", 00:13:37.180 "raid_level": "raid1", 00:13:37.180 "superblock": false, 00:13:37.180 "num_base_bdevs": 4, 00:13:37.181 "num_base_bdevs_discovered": 4, 00:13:37.181 "num_base_bdevs_operational": 4, 
00:13:37.181 "process": { 00:13:37.181 "type": "rebuild", 00:13:37.181 "target": "spare", 00:13:37.181 "progress": { 00:13:37.181 "blocks": 20480, 00:13:37.181 "percent": 31 00:13:37.181 } 00:13:37.181 }, 00:13:37.181 "base_bdevs_list": [ 00:13:37.181 { 00:13:37.181 "name": "spare", 00:13:37.181 "uuid": "d9a81610-86af-5d9d-90c7-0b0d0b249ff3", 00:13:37.181 "is_configured": true, 00:13:37.181 "data_offset": 0, 00:13:37.181 "data_size": 65536 00:13:37.181 }, 00:13:37.181 { 00:13:37.181 "name": "BaseBdev2", 00:13:37.181 "uuid": "d51a09aa-c12f-557a-8e07-7c62a4985fab", 00:13:37.181 "is_configured": true, 00:13:37.181 "data_offset": 0, 00:13:37.181 "data_size": 65536 00:13:37.181 }, 00:13:37.181 { 00:13:37.181 "name": "BaseBdev3", 00:13:37.181 "uuid": "08ef5474-a9df-5d6b-a350-b8eedb05ee19", 00:13:37.181 "is_configured": true, 00:13:37.181 "data_offset": 0, 00:13:37.181 "data_size": 65536 00:13:37.181 }, 00:13:37.181 { 00:13:37.181 "name": "BaseBdev4", 00:13:37.181 "uuid": "4a6f876e-c027-51bf-8bfe-ce2c099940d8", 00:13:37.181 "is_configured": true, 00:13:37.181 "data_offset": 0, 00:13:37.181 "data_size": 65536 00:13:37.181 } 00:13:37.181 ] 00:13:37.181 }' 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.181 [2024-09-29 21:44:55.880148] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:37.181 [2024-09-29 21:44:55.924011] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.181 "name": "raid_bdev1", 00:13:37.181 "uuid": "745fe3be-6881-40de-a856-7487a569d0d8", 00:13:37.181 "strip_size_kb": 0, 00:13:37.181 "state": "online", 00:13:37.181 "raid_level": "raid1", 00:13:37.181 "superblock": false, 00:13:37.181 "num_base_bdevs": 4, 00:13:37.181 "num_base_bdevs_discovered": 3, 00:13:37.181 "num_base_bdevs_operational": 3, 00:13:37.181 "process": { 00:13:37.181 "type": "rebuild", 00:13:37.181 "target": "spare", 00:13:37.181 "progress": { 00:13:37.181 "blocks": 24576, 00:13:37.181 "percent": 37 00:13:37.181 } 00:13:37.181 }, 00:13:37.181 "base_bdevs_list": [ 00:13:37.181 { 00:13:37.181 "name": "spare", 00:13:37.181 "uuid": "d9a81610-86af-5d9d-90c7-0b0d0b249ff3", 00:13:37.181 "is_configured": true, 00:13:37.181 "data_offset": 0, 00:13:37.181 "data_size": 65536 00:13:37.181 }, 00:13:37.181 { 00:13:37.181 "name": null, 00:13:37.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.181 "is_configured": false, 00:13:37.181 "data_offset": 0, 00:13:37.181 "data_size": 65536 00:13:37.181 }, 00:13:37.181 { 00:13:37.181 "name": "BaseBdev3", 00:13:37.181 "uuid": "08ef5474-a9df-5d6b-a350-b8eedb05ee19", 00:13:37.181 "is_configured": true, 00:13:37.181 "data_offset": 0, 00:13:37.181 "data_size": 65536 00:13:37.181 }, 00:13:37.181 { 00:13:37.181 "name": "BaseBdev4", 00:13:37.181 "uuid": "4a6f876e-c027-51bf-8bfe-ce2c099940d8", 00:13:37.181 "is_configured": true, 00:13:37.181 "data_offset": 0, 00:13:37.181 "data_size": 65536 00:13:37.181 } 00:13:37.181 ] 00:13:37.181 }' 00:13:37.181 21:44:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.181 21:44:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=452 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.181 "name": "raid_bdev1", 00:13:37.181 "uuid": "745fe3be-6881-40de-a856-7487a569d0d8", 00:13:37.181 "strip_size_kb": 0, 00:13:37.181 "state": "online", 00:13:37.181 "raid_level": "raid1", 00:13:37.181 "superblock": false, 00:13:37.181 "num_base_bdevs": 4, 00:13:37.181 "num_base_bdevs_discovered": 3, 00:13:37.181 "num_base_bdevs_operational": 3, 00:13:37.181 "process": { 00:13:37.181 "type": "rebuild", 00:13:37.181 "target": "spare", 00:13:37.181 "progress": { 00:13:37.181 "blocks": 26624, 00:13:37.181 "percent": 40 
00:13:37.181 } 00:13:37.181 }, 00:13:37.181 "base_bdevs_list": [ 00:13:37.181 { 00:13:37.181 "name": "spare", 00:13:37.181 "uuid": "d9a81610-86af-5d9d-90c7-0b0d0b249ff3", 00:13:37.181 "is_configured": true, 00:13:37.181 "data_offset": 0, 00:13:37.181 "data_size": 65536 00:13:37.181 }, 00:13:37.181 { 00:13:37.181 "name": null, 00:13:37.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.181 "is_configured": false, 00:13:37.181 "data_offset": 0, 00:13:37.181 "data_size": 65536 00:13:37.181 }, 00:13:37.181 { 00:13:37.181 "name": "BaseBdev3", 00:13:37.181 "uuid": "08ef5474-a9df-5d6b-a350-b8eedb05ee19", 00:13:37.181 "is_configured": true, 00:13:37.181 "data_offset": 0, 00:13:37.181 "data_size": 65536 00:13:37.181 }, 00:13:37.181 { 00:13:37.181 "name": "BaseBdev4", 00:13:37.181 "uuid": "4a6f876e-c027-51bf-8bfe-ce2c099940d8", 00:13:37.181 "is_configured": true, 00:13:37.181 "data_offset": 0, 00:13:37.181 "data_size": 65536 00:13:37.181 } 00:13:37.181 ] 00:13:37.181 }' 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.181 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.441 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.441 21:44:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.381 21:44:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.381 "name": "raid_bdev1", 00:13:38.381 "uuid": "745fe3be-6881-40de-a856-7487a569d0d8", 00:13:38.381 "strip_size_kb": 0, 00:13:38.381 "state": "online", 00:13:38.381 "raid_level": "raid1", 00:13:38.381 "superblock": false, 00:13:38.381 "num_base_bdevs": 4, 00:13:38.381 "num_base_bdevs_discovered": 3, 00:13:38.381 "num_base_bdevs_operational": 3, 00:13:38.381 "process": { 00:13:38.381 "type": "rebuild", 00:13:38.381 "target": "spare", 00:13:38.381 "progress": { 00:13:38.381 "blocks": 49152, 00:13:38.381 "percent": 75 00:13:38.381 } 00:13:38.381 }, 00:13:38.381 "base_bdevs_list": [ 00:13:38.381 { 00:13:38.381 "name": "spare", 00:13:38.381 "uuid": "d9a81610-86af-5d9d-90c7-0b0d0b249ff3", 00:13:38.381 "is_configured": true, 00:13:38.381 "data_offset": 0, 00:13:38.381 "data_size": 65536 00:13:38.381 }, 00:13:38.381 { 00:13:38.381 "name": null, 00:13:38.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.381 "is_configured": false, 00:13:38.381 "data_offset": 0, 00:13:38.381 "data_size": 65536 00:13:38.381 }, 00:13:38.381 { 00:13:38.381 "name": "BaseBdev3", 00:13:38.381 "uuid": "08ef5474-a9df-5d6b-a350-b8eedb05ee19", 00:13:38.381 "is_configured": true, 
00:13:38.381 "data_offset": 0, 00:13:38.381 "data_size": 65536 00:13:38.381 }, 00:13:38.381 { 00:13:38.381 "name": "BaseBdev4", 00:13:38.381 "uuid": "4a6f876e-c027-51bf-8bfe-ce2c099940d8", 00:13:38.381 "is_configured": true, 00:13:38.381 "data_offset": 0, 00:13:38.381 "data_size": 65536 00:13:38.381 } 00:13:38.381 ] 00:13:38.381 }' 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.381 21:44:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.950 [2024-09-29 21:44:57.931431] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:38.950 [2024-09-29 21:44:57.931498] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:38.950 [2024-09-29 21:44:57.931539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.519 "name": "raid_bdev1", 00:13:39.519 "uuid": "745fe3be-6881-40de-a856-7487a569d0d8", 00:13:39.519 "strip_size_kb": 0, 00:13:39.519 "state": "online", 00:13:39.519 "raid_level": "raid1", 00:13:39.519 "superblock": false, 00:13:39.519 "num_base_bdevs": 4, 00:13:39.519 "num_base_bdevs_discovered": 3, 00:13:39.519 "num_base_bdevs_operational": 3, 00:13:39.519 "base_bdevs_list": [ 00:13:39.519 { 00:13:39.519 "name": "spare", 00:13:39.519 "uuid": "d9a81610-86af-5d9d-90c7-0b0d0b249ff3", 00:13:39.519 "is_configured": true, 00:13:39.519 "data_offset": 0, 00:13:39.519 "data_size": 65536 00:13:39.519 }, 00:13:39.519 { 00:13:39.519 "name": null, 00:13:39.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.519 "is_configured": false, 00:13:39.519 "data_offset": 0, 00:13:39.519 "data_size": 65536 00:13:39.519 }, 00:13:39.519 { 00:13:39.519 "name": "BaseBdev3", 00:13:39.519 "uuid": "08ef5474-a9df-5d6b-a350-b8eedb05ee19", 00:13:39.519 "is_configured": true, 00:13:39.519 "data_offset": 0, 00:13:39.519 "data_size": 65536 00:13:39.519 }, 00:13:39.519 { 00:13:39.519 "name": "BaseBdev4", 00:13:39.519 "uuid": "4a6f876e-c027-51bf-8bfe-ce2c099940d8", 00:13:39.519 "is_configured": true, 00:13:39.519 "data_offset": 0, 00:13:39.519 "data_size": 65536 00:13:39.519 } 00:13:39.519 ] 00:13:39.519 }' 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.519 21:44:58 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.519 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.520 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.520 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.520 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.520 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.520 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.520 21:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.520 21:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.780 "name": "raid_bdev1", 00:13:39.780 "uuid": "745fe3be-6881-40de-a856-7487a569d0d8", 00:13:39.780 "strip_size_kb": 0, 00:13:39.780 "state": "online", 00:13:39.780 "raid_level": "raid1", 00:13:39.780 "superblock": false, 00:13:39.780 "num_base_bdevs": 4, 00:13:39.780 "num_base_bdevs_discovered": 3, 00:13:39.780 "num_base_bdevs_operational": 3, 00:13:39.780 "base_bdevs_list": [ 00:13:39.780 { 00:13:39.780 "name": "spare", 
00:13:39.780 "uuid": "d9a81610-86af-5d9d-90c7-0b0d0b249ff3", 00:13:39.780 "is_configured": true, 00:13:39.780 "data_offset": 0, 00:13:39.780 "data_size": 65536 00:13:39.780 }, 00:13:39.780 { 00:13:39.780 "name": null, 00:13:39.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.780 "is_configured": false, 00:13:39.780 "data_offset": 0, 00:13:39.780 "data_size": 65536 00:13:39.780 }, 00:13:39.780 { 00:13:39.780 "name": "BaseBdev3", 00:13:39.780 "uuid": "08ef5474-a9df-5d6b-a350-b8eedb05ee19", 00:13:39.780 "is_configured": true, 00:13:39.780 "data_offset": 0, 00:13:39.780 "data_size": 65536 00:13:39.780 }, 00:13:39.780 { 00:13:39.780 "name": "BaseBdev4", 00:13:39.780 "uuid": "4a6f876e-c027-51bf-8bfe-ce2c099940d8", 00:13:39.780 "is_configured": true, 00:13:39.780 "data_offset": 0, 00:13:39.780 "data_size": 65536 00:13:39.780 } 00:13:39.780 ] 00:13:39.780 }' 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.780 21:44:58 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.780 "name": "raid_bdev1", 00:13:39.780 "uuid": "745fe3be-6881-40de-a856-7487a569d0d8", 00:13:39.780 "strip_size_kb": 0, 00:13:39.780 "state": "online", 00:13:39.780 "raid_level": "raid1", 00:13:39.780 "superblock": false, 00:13:39.780 "num_base_bdevs": 4, 00:13:39.780 "num_base_bdevs_discovered": 3, 00:13:39.780 "num_base_bdevs_operational": 3, 00:13:39.780 "base_bdevs_list": [ 00:13:39.780 { 00:13:39.780 "name": "spare", 00:13:39.780 "uuid": "d9a81610-86af-5d9d-90c7-0b0d0b249ff3", 00:13:39.780 "is_configured": true, 00:13:39.780 "data_offset": 0, 00:13:39.780 "data_size": 65536 00:13:39.780 }, 00:13:39.780 { 00:13:39.780 "name": null, 00:13:39.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.780 "is_configured": false, 00:13:39.780 "data_offset": 0, 00:13:39.780 "data_size": 65536 00:13:39.780 }, 00:13:39.780 { 00:13:39.780 "name": "BaseBdev3", 00:13:39.780 "uuid": "08ef5474-a9df-5d6b-a350-b8eedb05ee19", 00:13:39.780 "is_configured": true, 
00:13:39.780 "data_offset": 0, 00:13:39.780 "data_size": 65536 00:13:39.780 }, 00:13:39.780 { 00:13:39.780 "name": "BaseBdev4", 00:13:39.780 "uuid": "4a6f876e-c027-51bf-8bfe-ce2c099940d8", 00:13:39.780 "is_configured": true, 00:13:39.780 "data_offset": 0, 00:13:39.780 "data_size": 65536 00:13:39.780 } 00:13:39.780 ] 00:13:39.780 }' 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.780 21:44:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.350 [2024-09-29 21:44:59.104214] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.350 [2024-09-29 21:44:59.104294] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.350 [2024-09-29 21:44:59.104381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.350 [2024-09-29 21:44:59.104471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.350 [2024-09-29 21:44:59.104517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:40.350 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:40.609 /dev/nbd0 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:40.609 21:44:59 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.609 1+0 records in 00:13:40.609 1+0 records out 00:13:40.609 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243064 s, 16.9 MB/s 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:40.609 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:40.869 /dev/nbd1 00:13:40.869 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:40.869 
21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:40.869 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:40.869 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:40.869 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:40.869 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:40.869 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:40.869 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:40.869 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:40.869 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:40.869 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.869 1+0 records in 00:13:40.870 1+0 records out 00:13:40.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276464 s, 14.8 MB/s 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.870 21:44:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:41.130 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:41.130 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:41.130 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:41.130 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.130 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.130 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:41.130 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:41.130 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.130 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.130 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:41.390 
21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77608 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 77608 ']' 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 77608 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77608 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77608' 00:13:41.390 killing process with pid 77608 00:13:41.390 Received shutdown signal, test time was about 60.000000 seconds 00:13:41.390 00:13:41.390 Latency(us) 
00:13:41.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.390 =================================================================================================================== 00:13:41.390 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 77608 00:13:41.390 [2024-09-29 21:45:00.319819] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:41.390 21:45:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 77608 00:13:41.960 [2024-09-29 21:45:00.771948] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:43.340 21:45:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:43.340 00:13:43.340 real 0m17.493s 00:13:43.340 user 0m19.571s 00:13:43.340 sys 0m3.428s 00:13:43.340 ************************************ 00:13:43.340 END TEST raid_rebuild_test 00:13:43.340 ************************************ 00:13:43.340 21:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.340 21:45:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.340 21:45:01 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:43.340 21:45:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:43.340 21:45:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.340 21:45:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:43.340 ************************************ 00:13:43.340 START TEST raid_rebuild_test_sb 00:13:43.340 ************************************ 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:43.340 
21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:43.340 21:45:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78053 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78053 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78053 ']' 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:43.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.340 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.340 [2024-09-29 21:45:02.120748] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:43.340 [2024-09-29 21:45:02.120960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:43.340 Zero copy mechanism will not be used. 00:13:43.340 -allocations --file-prefix=spdk_pid78053 ] 00:13:43.340 [2024-09-29 21:45:02.287611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.599 [2024-09-29 21:45:02.476184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.863 [2024-09-29 21:45:02.648891] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:43.863 [2024-09-29 21:45:02.648930] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.124 BaseBdev1_malloc 
00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.124 [2024-09-29 21:45:02.971936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:44.124 [2024-09-29 21:45:02.972116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.124 [2024-09-29 21:45:02.972166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:44.124 [2024-09-29 21:45:02.972200] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.124 [2024-09-29 21:45:02.974174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.124 [2024-09-29 21:45:02.974244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:44.124 BaseBdev1 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.124 21:45:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.124 BaseBdev2_malloc 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.124 [2024-09-29 21:45:03.035483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:44.124 [2024-09-29 21:45:03.035546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.124 [2024-09-29 21:45:03.035566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:44.124 [2024-09-29 21:45:03.035576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.124 [2024-09-29 21:45:03.037466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.124 [2024-09-29 21:45:03.037509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:44.124 BaseBdev2 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.124 BaseBdev3_malloc 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.124 21:45:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.124 [2024-09-29 21:45:03.088388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:44.124 [2024-09-29 21:45:03.088443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.124 [2024-09-29 21:45:03.088466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:44.124 [2024-09-29 21:45:03.088477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.124 [2024-09-29 21:45:03.090410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.124 [2024-09-29 21:45:03.090520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:44.124 BaseBdev3 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.124 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.384 BaseBdev4_malloc 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.384 [2024-09-29 21:45:03.142314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev4_malloc 00:13:44.384 [2024-09-29 21:45:03.142371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.384 [2024-09-29 21:45:03.142406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:44.384 [2024-09-29 21:45:03.142417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.384 [2024-09-29 21:45:03.144365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.384 [2024-09-29 21:45:03.144407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:44.384 BaseBdev4 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.384 spare_malloc 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.384 spare_delay 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.384 21:45:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.384 [2024-09-29 21:45:03.206846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:44.384 [2024-09-29 21:45:03.206963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.384 [2024-09-29 21:45:03.207000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:44.384 [2024-09-29 21:45:03.207037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.384 [2024-09-29 21:45:03.208898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.384 [2024-09-29 21:45:03.208971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:44.384 spare 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.384 [2024-09-29 21:45:03.218884] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:44.384 [2024-09-29 21:45:03.220515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.384 [2024-09-29 21:45:03.220619] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.384 [2024-09-29 21:45:03.220673] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:44.384 [2024-09-29 21:45:03.220841] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:44.384 [2024-09-29 21:45:03.220854] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:44.384 [2024-09-29 21:45:03.221094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:44.384 [2024-09-29 21:45:03.221248] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:44.384 [2024-09-29 21:45:03.221257] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:44.384 [2024-09-29 21:45:03.221392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.384 "name": "raid_bdev1", 00:13:44.384 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:44.384 "strip_size_kb": 0, 00:13:44.384 "state": "online", 00:13:44.384 "raid_level": "raid1", 00:13:44.384 "superblock": true, 00:13:44.384 "num_base_bdevs": 4, 00:13:44.384 "num_base_bdevs_discovered": 4, 00:13:44.384 "num_base_bdevs_operational": 4, 00:13:44.384 "base_bdevs_list": [ 00:13:44.384 { 00:13:44.384 "name": "BaseBdev1", 00:13:44.384 "uuid": "d8af2761-3793-55cc-9ffe-1ea3153348e7", 00:13:44.384 "is_configured": true, 00:13:44.384 "data_offset": 2048, 00:13:44.384 "data_size": 63488 00:13:44.384 }, 00:13:44.384 { 00:13:44.384 "name": "BaseBdev2", 00:13:44.384 "uuid": "61c95e14-b68d-592e-a32a-983975a605b5", 00:13:44.384 "is_configured": true, 00:13:44.384 "data_offset": 2048, 00:13:44.384 "data_size": 63488 00:13:44.384 }, 00:13:44.384 { 00:13:44.384 "name": "BaseBdev3", 00:13:44.384 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:44.384 "is_configured": true, 00:13:44.384 "data_offset": 2048, 00:13:44.384 "data_size": 63488 00:13:44.384 }, 00:13:44.384 { 00:13:44.384 "name": "BaseBdev4", 00:13:44.384 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:44.384 "is_configured": true, 00:13:44.384 "data_offset": 2048, 00:13:44.384 "data_size": 63488 00:13:44.384 } 00:13:44.384 ] 00:13:44.384 }' 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.384 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.953 [2024-09-29 21:45:03.654383] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.953 
21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.953 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:44.953 [2024-09-29 21:45:03.929661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:45.212 /dev/nbd0 00:13:45.212 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:45.212 21:45:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:45.212 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:45.212 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:45.212 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:45.212 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:45.212 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:45.212 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:45.212 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:45.212 21:45:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:45.212 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:45.212 1+0 records in 00:13:45.212 1+0 records out 00:13:45.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514377 s, 8.0 MB/s 00:13:45.213 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.213 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:45.213 21:45:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:45.213 21:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:45.213 21:45:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:45.213 21:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:45.213 21:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:45.213 21:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:45.213 21:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:45.213 21:45:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:50.488 63488+0 records in 00:13:50.488 63488+0 records out 00:13:50.488 32505856 bytes (33 MB, 31 MiB) copied, 5.26027 s, 6.2 MB/s 00:13:50.488 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:50.488 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:50.488 21:45:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:50.488 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.488 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:50.488 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.488 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:50.748 [2024-09-29 21:45:09.478262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.748 [2024-09-29 21:45:09.494329] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.748 21:45:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.748 "name": "raid_bdev1", 00:13:50.748 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:50.748 "strip_size_kb": 0, 00:13:50.748 "state": "online", 00:13:50.748 "raid_level": "raid1", 00:13:50.748 "superblock": true, 00:13:50.748 "num_base_bdevs": 4, 
00:13:50.748 "num_base_bdevs_discovered": 3, 00:13:50.748 "num_base_bdevs_operational": 3, 00:13:50.748 "base_bdevs_list": [ 00:13:50.748 { 00:13:50.748 "name": null, 00:13:50.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.748 "is_configured": false, 00:13:50.748 "data_offset": 0, 00:13:50.748 "data_size": 63488 00:13:50.748 }, 00:13:50.748 { 00:13:50.748 "name": "BaseBdev2", 00:13:50.748 "uuid": "61c95e14-b68d-592e-a32a-983975a605b5", 00:13:50.748 "is_configured": true, 00:13:50.748 "data_offset": 2048, 00:13:50.748 "data_size": 63488 00:13:50.748 }, 00:13:50.748 { 00:13:50.748 "name": "BaseBdev3", 00:13:50.748 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:50.748 "is_configured": true, 00:13:50.748 "data_offset": 2048, 00:13:50.748 "data_size": 63488 00:13:50.748 }, 00:13:50.748 { 00:13:50.748 "name": "BaseBdev4", 00:13:50.748 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:50.748 "is_configured": true, 00:13:50.748 "data_offset": 2048, 00:13:50.748 "data_size": 63488 00:13:50.748 } 00:13:50.748 ] 00:13:50.748 }' 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.748 21:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.008 21:45:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:51.008 21:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.008 21:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.008 [2024-09-29 21:45:09.981472] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.267 [2024-09-29 21:45:09.994502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:51.267 21:45:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.267 21:45:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@647 -- # sleep 1 00:13:51.267 [2024-09-29 21:45:09.996292] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:52.207 21:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.207 21:45:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.207 "name": "raid_bdev1", 00:13:52.207 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:52.207 "strip_size_kb": 0, 00:13:52.207 "state": "online", 00:13:52.207 "raid_level": "raid1", 00:13:52.207 "superblock": true, 00:13:52.207 "num_base_bdevs": 4, 00:13:52.207 "num_base_bdevs_discovered": 4, 00:13:52.207 "num_base_bdevs_operational": 4, 00:13:52.207 "process": { 00:13:52.207 "type": "rebuild", 00:13:52.207 "target": "spare", 00:13:52.207 "progress": { 00:13:52.207 "blocks": 20480, 00:13:52.207 "percent": 32 00:13:52.207 } 00:13:52.207 }, 00:13:52.207 "base_bdevs_list": [ 00:13:52.207 { 
00:13:52.207 "name": "spare", 00:13:52.207 "uuid": "1a80549a-98b7-5f37-a1e9-1d75c4d68eb4", 00:13:52.207 "is_configured": true, 00:13:52.207 "data_offset": 2048, 00:13:52.207 "data_size": 63488 00:13:52.207 }, 00:13:52.207 { 00:13:52.207 "name": "BaseBdev2", 00:13:52.207 "uuid": "61c95e14-b68d-592e-a32a-983975a605b5", 00:13:52.207 "is_configured": true, 00:13:52.207 "data_offset": 2048, 00:13:52.207 "data_size": 63488 00:13:52.207 }, 00:13:52.207 { 00:13:52.207 "name": "BaseBdev3", 00:13:52.207 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:52.207 "is_configured": true, 00:13:52.207 "data_offset": 2048, 00:13:52.207 "data_size": 63488 00:13:52.207 }, 00:13:52.207 { 00:13:52.207 "name": "BaseBdev4", 00:13:52.207 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:52.207 "is_configured": true, 00:13:52.207 "data_offset": 2048, 00:13:52.207 "data_size": 63488 00:13:52.207 } 00:13:52.207 ] 00:13:52.207 }' 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.207 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.207 [2024-09-29 21:45:11.136521] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.468 [2024-09-29 21:45:11.200911] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:52.468 [2024-09-29 
21:45:11.201023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.468 [2024-09-29 21:45:11.201070] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.468 [2024-09-29 21:45:11.201094] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.468 "name": "raid_bdev1", 00:13:52.468 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:52.468 "strip_size_kb": 0, 00:13:52.468 "state": "online", 00:13:52.468 "raid_level": "raid1", 00:13:52.468 "superblock": true, 00:13:52.468 "num_base_bdevs": 4, 00:13:52.468 "num_base_bdevs_discovered": 3, 00:13:52.468 "num_base_bdevs_operational": 3, 00:13:52.468 "base_bdevs_list": [ 00:13:52.468 { 00:13:52.468 "name": null, 00:13:52.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.468 "is_configured": false, 00:13:52.468 "data_offset": 0, 00:13:52.468 "data_size": 63488 00:13:52.468 }, 00:13:52.468 { 00:13:52.468 "name": "BaseBdev2", 00:13:52.468 "uuid": "61c95e14-b68d-592e-a32a-983975a605b5", 00:13:52.468 "is_configured": true, 00:13:52.468 "data_offset": 2048, 00:13:52.468 "data_size": 63488 00:13:52.468 }, 00:13:52.468 { 00:13:52.468 "name": "BaseBdev3", 00:13:52.468 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:52.468 "is_configured": true, 00:13:52.468 "data_offset": 2048, 00:13:52.468 "data_size": 63488 00:13:52.468 }, 00:13:52.468 { 00:13:52.468 "name": "BaseBdev4", 00:13:52.468 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:52.468 "is_configured": true, 00:13:52.468 "data_offset": 2048, 00:13:52.468 "data_size": 63488 00:13:52.468 } 00:13:52.468 ] 00:13:52.468 }' 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.468 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.728 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.728 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.728 21:45:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:52.728 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.728 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.728 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.728 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.728 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.728 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.728 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.728 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.728 "name": "raid_bdev1", 00:13:52.728 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:52.728 "strip_size_kb": 0, 00:13:52.728 "state": "online", 00:13:52.728 "raid_level": "raid1", 00:13:52.728 "superblock": true, 00:13:52.728 "num_base_bdevs": 4, 00:13:52.728 "num_base_bdevs_discovered": 3, 00:13:52.728 "num_base_bdevs_operational": 3, 00:13:52.728 "base_bdevs_list": [ 00:13:52.728 { 00:13:52.728 "name": null, 00:13:52.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.728 "is_configured": false, 00:13:52.728 "data_offset": 0, 00:13:52.728 "data_size": 63488 00:13:52.728 }, 00:13:52.729 { 00:13:52.729 "name": "BaseBdev2", 00:13:52.729 "uuid": "61c95e14-b68d-592e-a32a-983975a605b5", 00:13:52.729 "is_configured": true, 00:13:52.729 "data_offset": 2048, 00:13:52.729 "data_size": 63488 00:13:52.729 }, 00:13:52.729 { 00:13:52.729 "name": "BaseBdev3", 00:13:52.729 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:52.729 "is_configured": true, 00:13:52.729 "data_offset": 2048, 00:13:52.729 "data_size": 63488 
00:13:52.729 }, 00:13:52.729 { 00:13:52.729 "name": "BaseBdev4", 00:13:52.729 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:52.729 "is_configured": true, 00:13:52.729 "data_offset": 2048, 00:13:52.729 "data_size": 63488 00:13:52.729 } 00:13:52.729 ] 00:13:52.729 }' 00:13:52.729 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.989 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.989 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.989 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.989 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:52.989 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.989 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.989 [2024-09-29 21:45:11.787087] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:52.989 [2024-09-29 21:45:11.800381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:52.989 21:45:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.989 21:45:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:52.989 [2024-09-29 21:45:11.802104] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:53.930 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.930 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.930 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:53.930 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.930 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.930 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.930 21:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.930 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.930 21:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.930 21:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.930 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.930 "name": "raid_bdev1", 00:13:53.930 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:53.930 "strip_size_kb": 0, 00:13:53.930 "state": "online", 00:13:53.930 "raid_level": "raid1", 00:13:53.930 "superblock": true, 00:13:53.930 "num_base_bdevs": 4, 00:13:53.930 "num_base_bdevs_discovered": 4, 00:13:53.930 "num_base_bdevs_operational": 4, 00:13:53.930 "process": { 00:13:53.930 "type": "rebuild", 00:13:53.930 "target": "spare", 00:13:53.930 "progress": { 00:13:53.930 "blocks": 20480, 00:13:53.930 "percent": 32 00:13:53.930 } 00:13:53.930 }, 00:13:53.930 "base_bdevs_list": [ 00:13:53.930 { 00:13:53.930 "name": "spare", 00:13:53.931 "uuid": "1a80549a-98b7-5f37-a1e9-1d75c4d68eb4", 00:13:53.931 "is_configured": true, 00:13:53.931 "data_offset": 2048, 00:13:53.931 "data_size": 63488 00:13:53.931 }, 00:13:53.931 { 00:13:53.931 "name": "BaseBdev2", 00:13:53.931 "uuid": "61c95e14-b68d-592e-a32a-983975a605b5", 00:13:53.931 "is_configured": true, 00:13:53.931 "data_offset": 2048, 00:13:53.931 "data_size": 63488 00:13:53.931 }, 00:13:53.931 { 00:13:53.931 "name": "BaseBdev3", 00:13:53.931 "uuid": 
"0985f655-defd-500f-8d76-4b07675009ef", 00:13:53.931 "is_configured": true, 00:13:53.931 "data_offset": 2048, 00:13:53.931 "data_size": 63488 00:13:53.931 }, 00:13:53.931 { 00:13:53.931 "name": "BaseBdev4", 00:13:53.931 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:53.931 "is_configured": true, 00:13:53.931 "data_offset": 2048, 00:13:53.931 "data_size": 63488 00:13:53.931 } 00:13:53.931 ] 00:13:53.931 }' 00:13:53.931 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.931 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.931 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.191 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.191 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:54.191 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:54.191 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:54.191 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:54.191 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:54.191 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:54.191 21:45:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:54.191 21:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.191 21:45:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.191 [2024-09-29 21:45:12.962144] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:54.191 [2024-09-29 21:45:13.106393] 
bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.191 "name": "raid_bdev1", 00:13:54.191 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:54.191 "strip_size_kb": 0, 00:13:54.191 "state": "online", 00:13:54.191 "raid_level": "raid1", 00:13:54.191 "superblock": true, 00:13:54.191 "num_base_bdevs": 4, 00:13:54.191 "num_base_bdevs_discovered": 3, 00:13:54.191 "num_base_bdevs_operational": 3, 00:13:54.191 
"process": { 00:13:54.191 "type": "rebuild", 00:13:54.191 "target": "spare", 00:13:54.191 "progress": { 00:13:54.191 "blocks": 24576, 00:13:54.191 "percent": 38 00:13:54.191 } 00:13:54.191 }, 00:13:54.191 "base_bdevs_list": [ 00:13:54.191 { 00:13:54.191 "name": "spare", 00:13:54.191 "uuid": "1a80549a-98b7-5f37-a1e9-1d75c4d68eb4", 00:13:54.191 "is_configured": true, 00:13:54.191 "data_offset": 2048, 00:13:54.191 "data_size": 63488 00:13:54.191 }, 00:13:54.191 { 00:13:54.191 "name": null, 00:13:54.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.191 "is_configured": false, 00:13:54.191 "data_offset": 0, 00:13:54.191 "data_size": 63488 00:13:54.191 }, 00:13:54.191 { 00:13:54.191 "name": "BaseBdev3", 00:13:54.191 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:54.191 "is_configured": true, 00:13:54.191 "data_offset": 2048, 00:13:54.191 "data_size": 63488 00:13:54.191 }, 00:13:54.191 { 00:13:54.191 "name": "BaseBdev4", 00:13:54.191 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:54.191 "is_configured": true, 00:13:54.191 "data_offset": 2048, 00:13:54.191 "data_size": 63488 00:13:54.191 } 00:13:54.191 ] 00:13:54.191 }' 00:13:54.191 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=469 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.451 21:45:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.451 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.451 "name": "raid_bdev1", 00:13:54.451 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:54.451 "strip_size_kb": 0, 00:13:54.451 "state": "online", 00:13:54.451 "raid_level": "raid1", 00:13:54.451 "superblock": true, 00:13:54.451 "num_base_bdevs": 4, 00:13:54.451 "num_base_bdevs_discovered": 3, 00:13:54.452 "num_base_bdevs_operational": 3, 00:13:54.452 "process": { 00:13:54.452 "type": "rebuild", 00:13:54.452 "target": "spare", 00:13:54.452 "progress": { 00:13:54.452 "blocks": 26624, 00:13:54.452 "percent": 41 00:13:54.452 } 00:13:54.452 }, 00:13:54.452 "base_bdevs_list": [ 00:13:54.452 { 00:13:54.452 "name": "spare", 00:13:54.452 "uuid": "1a80549a-98b7-5f37-a1e9-1d75c4d68eb4", 00:13:54.452 "is_configured": true, 00:13:54.452 "data_offset": 2048, 00:13:54.452 "data_size": 63488 00:13:54.452 }, 00:13:54.452 { 00:13:54.452 "name": null, 00:13:54.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.452 
"is_configured": false, 00:13:54.452 "data_offset": 0, 00:13:54.452 "data_size": 63488 00:13:54.452 }, 00:13:54.452 { 00:13:54.452 "name": "BaseBdev3", 00:13:54.452 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:54.452 "is_configured": true, 00:13:54.452 "data_offset": 2048, 00:13:54.452 "data_size": 63488 00:13:54.452 }, 00:13:54.452 { 00:13:54.452 "name": "BaseBdev4", 00:13:54.452 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:54.452 "is_configured": true, 00:13:54.452 "data_offset": 2048, 00:13:54.452 "data_size": 63488 00:13:54.452 } 00:13:54.452 ] 00:13:54.452 }' 00:13:54.452 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.452 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.452 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.452 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.452 21:45:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:55.392 21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.392 21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.392 21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.392 21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.392 21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.392 21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.392 21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.392 21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.392 21:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.392 21:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.652 21:45:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.652 21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.652 "name": "raid_bdev1", 00:13:55.652 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:55.652 "strip_size_kb": 0, 00:13:55.652 "state": "online", 00:13:55.652 "raid_level": "raid1", 00:13:55.652 "superblock": true, 00:13:55.652 "num_base_bdevs": 4, 00:13:55.652 "num_base_bdevs_discovered": 3, 00:13:55.652 "num_base_bdevs_operational": 3, 00:13:55.652 "process": { 00:13:55.652 "type": "rebuild", 00:13:55.652 "target": "spare", 00:13:55.652 "progress": { 00:13:55.652 "blocks": 49152, 00:13:55.652 "percent": 77 00:13:55.652 } 00:13:55.652 }, 00:13:55.652 "base_bdevs_list": [ 00:13:55.652 { 00:13:55.652 "name": "spare", 00:13:55.652 "uuid": "1a80549a-98b7-5f37-a1e9-1d75c4d68eb4", 00:13:55.652 "is_configured": true, 00:13:55.652 "data_offset": 2048, 00:13:55.652 "data_size": 63488 00:13:55.652 }, 00:13:55.652 { 00:13:55.652 "name": null, 00:13:55.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.652 "is_configured": false, 00:13:55.652 "data_offset": 0, 00:13:55.652 "data_size": 63488 00:13:55.652 }, 00:13:55.652 { 00:13:55.652 "name": "BaseBdev3", 00:13:55.652 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:55.652 "is_configured": true, 00:13:55.652 "data_offset": 2048, 00:13:55.652 "data_size": 63488 00:13:55.652 }, 00:13:55.652 { 00:13:55.652 "name": "BaseBdev4", 00:13:55.652 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:55.652 "is_configured": true, 00:13:55.652 "data_offset": 2048, 00:13:55.652 "data_size": 63488 00:13:55.652 } 00:13:55.652 ] 00:13:55.652 }' 00:13:55.652 
21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.652 21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.652 21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.652 21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.652 21:45:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:56.222 [2024-09-29 21:45:15.013649] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:56.222 [2024-09-29 21:45:15.013718] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:56.222 [2024-09-29 21:45:15.013826] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.792 "name": "raid_bdev1", 00:13:56.792 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:56.792 "strip_size_kb": 0, 00:13:56.792 "state": "online", 00:13:56.792 "raid_level": "raid1", 00:13:56.792 "superblock": true, 00:13:56.792 "num_base_bdevs": 4, 00:13:56.792 "num_base_bdevs_discovered": 3, 00:13:56.792 "num_base_bdevs_operational": 3, 00:13:56.792 "base_bdevs_list": [ 00:13:56.792 { 00:13:56.792 "name": "spare", 00:13:56.792 "uuid": "1a80549a-98b7-5f37-a1e9-1d75c4d68eb4", 00:13:56.792 "is_configured": true, 00:13:56.792 "data_offset": 2048, 00:13:56.792 "data_size": 63488 00:13:56.792 }, 00:13:56.792 { 00:13:56.792 "name": null, 00:13:56.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.792 "is_configured": false, 00:13:56.792 "data_offset": 0, 00:13:56.792 "data_size": 63488 00:13:56.792 }, 00:13:56.792 { 00:13:56.792 "name": "BaseBdev3", 00:13:56.792 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:56.792 "is_configured": true, 00:13:56.792 "data_offset": 2048, 00:13:56.792 "data_size": 63488 00:13:56.792 }, 00:13:56.792 { 00:13:56.792 "name": "BaseBdev4", 00:13:56.792 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:56.792 "is_configured": true, 00:13:56.792 "data_offset": 2048, 00:13:56.792 "data_size": 63488 00:13:56.792 } 00:13:56.792 ] 00:13:56.792 }' 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == 
\s\p\a\r\e ]] 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.792 "name": "raid_bdev1", 00:13:56.792 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:56.792 "strip_size_kb": 0, 00:13:56.792 "state": "online", 00:13:56.792 "raid_level": "raid1", 00:13:56.792 "superblock": true, 00:13:56.792 "num_base_bdevs": 4, 00:13:56.792 "num_base_bdevs_discovered": 3, 00:13:56.792 "num_base_bdevs_operational": 3, 00:13:56.792 "base_bdevs_list": [ 00:13:56.792 { 00:13:56.792 "name": "spare", 00:13:56.792 "uuid": "1a80549a-98b7-5f37-a1e9-1d75c4d68eb4", 00:13:56.792 "is_configured": true, 00:13:56.792 "data_offset": 2048, 00:13:56.792 "data_size": 63488 00:13:56.792 }, 00:13:56.792 { 00:13:56.792 "name": null, 00:13:56.792 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:56.792 "is_configured": false, 00:13:56.792 "data_offset": 0, 00:13:56.792 "data_size": 63488 00:13:56.792 }, 00:13:56.792 { 00:13:56.792 "name": "BaseBdev3", 00:13:56.792 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:56.792 "is_configured": true, 00:13:56.792 "data_offset": 2048, 00:13:56.792 "data_size": 63488 00:13:56.792 }, 00:13:56.792 { 00:13:56.792 "name": "BaseBdev4", 00:13:56.792 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:56.792 "is_configured": true, 00:13:56.792 "data_offset": 2048, 00:13:56.792 "data_size": 63488 00:13:56.792 } 00:13:56.792 ] 00:13:56.792 }' 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.792 
21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.792 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.052 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.052 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.052 21:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.052 21:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.052 21:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.052 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.052 "name": "raid_bdev1", 00:13:57.052 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:57.052 "strip_size_kb": 0, 00:13:57.052 "state": "online", 00:13:57.052 "raid_level": "raid1", 00:13:57.052 "superblock": true, 00:13:57.052 "num_base_bdevs": 4, 00:13:57.052 "num_base_bdevs_discovered": 3, 00:13:57.052 "num_base_bdevs_operational": 3, 00:13:57.052 "base_bdevs_list": [ 00:13:57.052 { 00:13:57.052 "name": "spare", 00:13:57.052 "uuid": "1a80549a-98b7-5f37-a1e9-1d75c4d68eb4", 00:13:57.052 "is_configured": true, 00:13:57.052 "data_offset": 2048, 00:13:57.052 "data_size": 63488 00:13:57.052 }, 00:13:57.052 { 00:13:57.052 "name": null, 00:13:57.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.052 "is_configured": false, 00:13:57.052 "data_offset": 0, 00:13:57.052 "data_size": 63488 00:13:57.052 }, 00:13:57.052 { 00:13:57.052 "name": "BaseBdev3", 00:13:57.052 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:57.052 "is_configured": true, 00:13:57.052 "data_offset": 2048, 00:13:57.052 "data_size": 63488 00:13:57.052 }, 00:13:57.052 { 00:13:57.052 "name": "BaseBdev4", 00:13:57.052 "uuid": 
"749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:57.052 "is_configured": true, 00:13:57.052 "data_offset": 2048, 00:13:57.052 "data_size": 63488 00:13:57.052 } 00:13:57.052 ] 00:13:57.052 }' 00:13:57.052 21:45:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.052 21:45:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.312 [2024-09-29 21:45:16.194481] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.312 [2024-09-29 21:45:16.194561] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.312 [2024-09-29 21:45:16.194650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.312 [2024-09-29 21:45:16.194737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.312 [2024-09-29 21:45:16.194768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:57.312 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:57.571 /dev/nbd0 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # 
(( i = 1 )) 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.571 1+0 records in 00:13:57.571 1+0 records out 00:13:57.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333836 s, 12.3 MB/s 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:57.571 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:57.830 /dev/nbd1 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:57.830 21:45:16 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.830 1+0 records in 00:13:57.830 1+0 records out 00:13:57.830 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450234 s, 9.1 MB/s 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.830 21:45:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:57.831 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:58.090 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:58.090 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:58.090 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:58.090 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:58.090 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:58.090 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.090 21:45:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:58.350 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:58.350 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:58.350 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:58.350 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.350 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.350 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:58.350 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:58.350 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.350 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.350 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.611 [2024-09-29 21:45:17.369481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:58.611 [2024-09-29 21:45:17.369589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:13:58.611 [2024-09-29 21:45:17.369614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:58.611 [2024-09-29 21:45:17.369623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.611 [2024-09-29 21:45:17.371568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.611 [2024-09-29 21:45:17.371606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:58.611 [2024-09-29 21:45:17.371716] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:58.611 [2024-09-29 21:45:17.371770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.611 [2024-09-29 21:45:17.371860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:58.611 [2024-09-29 21:45:17.371950] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:58.611 spare 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.611 [2024-09-29 21:45:17.471833] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:58.611 [2024-09-29 21:45:17.471858] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:58.611 [2024-09-29 21:45:17.472119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:58.611 [2024-09-29 21:45:17.472291] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:58.611 [2024-09-29 21:45:17.472306] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:58.611 [2024-09-29 21:45:17.472435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.611 
21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.611 "name": "raid_bdev1", 00:13:58.611 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:58.611 "strip_size_kb": 0, 00:13:58.611 "state": "online", 00:13:58.611 "raid_level": "raid1", 00:13:58.611 "superblock": true, 00:13:58.611 "num_base_bdevs": 4, 00:13:58.611 "num_base_bdevs_discovered": 3, 00:13:58.611 "num_base_bdevs_operational": 3, 00:13:58.611 "base_bdevs_list": [ 00:13:58.611 { 00:13:58.611 "name": "spare", 00:13:58.611 "uuid": "1a80549a-98b7-5f37-a1e9-1d75c4d68eb4", 00:13:58.611 "is_configured": true, 00:13:58.611 "data_offset": 2048, 00:13:58.611 "data_size": 63488 00:13:58.611 }, 00:13:58.611 { 00:13:58.611 "name": null, 00:13:58.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.611 "is_configured": false, 00:13:58.611 "data_offset": 2048, 00:13:58.611 "data_size": 63488 00:13:58.611 }, 00:13:58.611 { 00:13:58.611 "name": "BaseBdev3", 00:13:58.611 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:58.611 "is_configured": true, 00:13:58.611 "data_offset": 2048, 00:13:58.611 "data_size": 63488 00:13:58.611 }, 00:13:58.611 { 00:13:58.611 "name": "BaseBdev4", 00:13:58.611 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:58.611 "is_configured": true, 00:13:58.611 "data_offset": 2048, 00:13:58.611 "data_size": 63488 00:13:58.611 } 00:13:58.611 ] 00:13:58.611 }' 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.611 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.182 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:59.182 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.182 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:59.182 21:45:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:59.182 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.182 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.182 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.182 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.182 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.182 21:45:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.182 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.182 "name": "raid_bdev1", 00:13:59.182 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:59.182 "strip_size_kb": 0, 00:13:59.182 "state": "online", 00:13:59.182 "raid_level": "raid1", 00:13:59.182 "superblock": true, 00:13:59.182 "num_base_bdevs": 4, 00:13:59.182 "num_base_bdevs_discovered": 3, 00:13:59.182 "num_base_bdevs_operational": 3, 00:13:59.182 "base_bdevs_list": [ 00:13:59.182 { 00:13:59.182 "name": "spare", 00:13:59.182 "uuid": "1a80549a-98b7-5f37-a1e9-1d75c4d68eb4", 00:13:59.182 "is_configured": true, 00:13:59.182 "data_offset": 2048, 00:13:59.182 "data_size": 63488 00:13:59.182 }, 00:13:59.182 { 00:13:59.182 "name": null, 00:13:59.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.182 "is_configured": false, 00:13:59.182 "data_offset": 2048, 00:13:59.182 "data_size": 63488 00:13:59.182 }, 00:13:59.182 { 00:13:59.182 "name": "BaseBdev3", 00:13:59.182 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:59.182 "is_configured": true, 00:13:59.182 "data_offset": 2048, 00:13:59.182 "data_size": 63488 00:13:59.182 }, 00:13:59.182 { 00:13:59.182 "name": "BaseBdev4", 00:13:59.182 "uuid": 
"749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:59.182 "is_configured": true, 00:13:59.182 "data_offset": 2048, 00:13:59.182 "data_size": 63488 00:13:59.182 } 00:13:59.182 ] 00:13:59.182 }' 00:13:59.182 21:45:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.182 [2024-09-29 21:45:18.140250] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:59.182 21:45:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.182 21:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.442 21:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.442 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.442 "name": "raid_bdev1", 00:13:59.442 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:13:59.442 "strip_size_kb": 0, 00:13:59.442 "state": "online", 00:13:59.442 "raid_level": "raid1", 00:13:59.442 "superblock": true, 00:13:59.442 "num_base_bdevs": 4, 00:13:59.442 "num_base_bdevs_discovered": 2, 00:13:59.442 "num_base_bdevs_operational": 2, 00:13:59.442 "base_bdevs_list": [ 00:13:59.442 { 
00:13:59.442 "name": null, 00:13:59.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.442 "is_configured": false, 00:13:59.442 "data_offset": 0, 00:13:59.442 "data_size": 63488 00:13:59.442 }, 00:13:59.442 { 00:13:59.442 "name": null, 00:13:59.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.442 "is_configured": false, 00:13:59.442 "data_offset": 2048, 00:13:59.442 "data_size": 63488 00:13:59.442 }, 00:13:59.442 { 00:13:59.442 "name": "BaseBdev3", 00:13:59.442 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:13:59.442 "is_configured": true, 00:13:59.442 "data_offset": 2048, 00:13:59.442 "data_size": 63488 00:13:59.442 }, 00:13:59.442 { 00:13:59.442 "name": "BaseBdev4", 00:13:59.442 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:13:59.442 "is_configured": true, 00:13:59.442 "data_offset": 2048, 00:13:59.442 "data_size": 63488 00:13:59.442 } 00:13:59.442 ] 00:13:59.442 }' 00:13:59.442 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.442 21:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.702 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:59.702 21:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.702 21:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.702 [2024-09-29 21:45:18.627942] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.702 [2024-09-29 21:45:18.628145] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:59.702 [2024-09-29 21:45:18.628221] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:59.702 [2024-09-29 21:45:18.628276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.702 [2024-09-29 21:45:18.640898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:59.702 21:45:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.702 21:45:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:59.702 [2024-09-29 21:45:18.642721] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.082 "name": "raid_bdev1", 00:14:01.082 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:14:01.082 "strip_size_kb": 0, 00:14:01.082 "state": "online", 00:14:01.082 "raid_level": "raid1", 
00:14:01.082 "superblock": true, 00:14:01.082 "num_base_bdevs": 4, 00:14:01.082 "num_base_bdevs_discovered": 3, 00:14:01.082 "num_base_bdevs_operational": 3, 00:14:01.082 "process": { 00:14:01.082 "type": "rebuild", 00:14:01.082 "target": "spare", 00:14:01.082 "progress": { 00:14:01.082 "blocks": 20480, 00:14:01.082 "percent": 32 00:14:01.082 } 00:14:01.082 }, 00:14:01.082 "base_bdevs_list": [ 00:14:01.082 { 00:14:01.082 "name": "spare", 00:14:01.082 "uuid": "1a80549a-98b7-5f37-a1e9-1d75c4d68eb4", 00:14:01.082 "is_configured": true, 00:14:01.082 "data_offset": 2048, 00:14:01.082 "data_size": 63488 00:14:01.082 }, 00:14:01.082 { 00:14:01.082 "name": null, 00:14:01.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.082 "is_configured": false, 00:14:01.082 "data_offset": 2048, 00:14:01.082 "data_size": 63488 00:14:01.082 }, 00:14:01.082 { 00:14:01.082 "name": "BaseBdev3", 00:14:01.082 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:14:01.082 "is_configured": true, 00:14:01.082 "data_offset": 2048, 00:14:01.082 "data_size": 63488 00:14:01.082 }, 00:14:01.082 { 00:14:01.082 "name": "BaseBdev4", 00:14:01.082 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:14:01.082 "is_configured": true, 00:14:01.082 "data_offset": 2048, 00:14:01.082 "data_size": 63488 00:14:01.082 } 00:14:01.082 ] 00:14:01.082 }' 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.082 [2024-09-29 21:45:19.786597] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.082 [2024-09-29 21:45:19.847478] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:01.082 [2024-09-29 21:45:19.847580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.082 [2024-09-29 21:45:19.847615] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.082 [2024-09-29 21:45:19.847635] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.082 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.082 "name": "raid_bdev1", 00:14:01.082 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:14:01.082 "strip_size_kb": 0, 00:14:01.082 "state": "online", 00:14:01.082 "raid_level": "raid1", 00:14:01.082 "superblock": true, 00:14:01.082 "num_base_bdevs": 4, 00:14:01.082 "num_base_bdevs_discovered": 2, 00:14:01.082 "num_base_bdevs_operational": 2, 00:14:01.082 "base_bdevs_list": [ 00:14:01.082 { 00:14:01.082 "name": null, 00:14:01.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.083 "is_configured": false, 00:14:01.083 "data_offset": 0, 00:14:01.083 "data_size": 63488 00:14:01.083 }, 00:14:01.083 { 00:14:01.083 "name": null, 00:14:01.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.083 "is_configured": false, 00:14:01.083 "data_offset": 2048, 00:14:01.083 "data_size": 63488 00:14:01.083 }, 00:14:01.083 { 00:14:01.083 "name": "BaseBdev3", 00:14:01.083 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:14:01.083 "is_configured": true, 00:14:01.083 "data_offset": 2048, 00:14:01.083 "data_size": 63488 00:14:01.083 }, 00:14:01.083 { 00:14:01.083 "name": "BaseBdev4", 00:14:01.083 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:14:01.083 "is_configured": true, 00:14:01.083 "data_offset": 2048, 00:14:01.083 "data_size": 63488 00:14:01.083 } 00:14:01.083 ] 00:14:01.083 }' 00:14:01.083 21:45:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:01.083 21:45:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.342 21:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:01.342 21:45:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.342 21:45:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.342 [2024-09-29 21:45:20.318267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:01.342 [2024-09-29 21:45:20.318323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.342 [2024-09-29 21:45:20.318349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:01.342 [2024-09-29 21:45:20.318358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.342 [2024-09-29 21:45:20.318800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.342 [2024-09-29 21:45:20.318817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:01.342 [2024-09-29 21:45:20.318893] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:01.342 [2024-09-29 21:45:20.318905] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:01.342 [2024-09-29 21:45:20.318918] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:01.342 [2024-09-29 21:45:20.318937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.602 [2024-09-29 21:45:20.331656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:01.602 spare 00:14:01.602 21:45:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.602 21:45:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:01.602 [2024-09-29 21:45:20.333406] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.543 "name": "raid_bdev1", 00:14:02.543 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:14:02.543 "strip_size_kb": 0, 00:14:02.543 "state": "online", 00:14:02.543 
"raid_level": "raid1", 00:14:02.543 "superblock": true, 00:14:02.543 "num_base_bdevs": 4, 00:14:02.543 "num_base_bdevs_discovered": 3, 00:14:02.543 "num_base_bdevs_operational": 3, 00:14:02.543 "process": { 00:14:02.543 "type": "rebuild", 00:14:02.543 "target": "spare", 00:14:02.543 "progress": { 00:14:02.543 "blocks": 20480, 00:14:02.543 "percent": 32 00:14:02.543 } 00:14:02.543 }, 00:14:02.543 "base_bdevs_list": [ 00:14:02.543 { 00:14:02.543 "name": "spare", 00:14:02.543 "uuid": "1a80549a-98b7-5f37-a1e9-1d75c4d68eb4", 00:14:02.543 "is_configured": true, 00:14:02.543 "data_offset": 2048, 00:14:02.543 "data_size": 63488 00:14:02.543 }, 00:14:02.543 { 00:14:02.543 "name": null, 00:14:02.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.543 "is_configured": false, 00:14:02.543 "data_offset": 2048, 00:14:02.543 "data_size": 63488 00:14:02.543 }, 00:14:02.543 { 00:14:02.543 "name": "BaseBdev3", 00:14:02.543 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:14:02.543 "is_configured": true, 00:14:02.543 "data_offset": 2048, 00:14:02.543 "data_size": 63488 00:14:02.543 }, 00:14:02.543 { 00:14:02.543 "name": "BaseBdev4", 00:14:02.543 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:14:02.543 "is_configured": true, 00:14:02.543 "data_offset": 2048, 00:14:02.543 "data_size": 63488 00:14:02.543 } 00:14:02.543 ] 00:14:02.543 }' 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.543 21:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.543 [2024-09-29 21:45:21.469548] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.803 [2024-09-29 21:45:21.537891] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:02.803 [2024-09-29 21:45:21.537949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.803 [2024-09-29 21:45:21.537963] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.803 [2024-09-29 21:45:21.537972] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.803 
21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.803 21:45:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.803 "name": "raid_bdev1", 00:14:02.803 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:14:02.803 "strip_size_kb": 0, 00:14:02.803 "state": "online", 00:14:02.803 "raid_level": "raid1", 00:14:02.804 "superblock": true, 00:14:02.804 "num_base_bdevs": 4, 00:14:02.804 "num_base_bdevs_discovered": 2, 00:14:02.804 "num_base_bdevs_operational": 2, 00:14:02.804 "base_bdevs_list": [ 00:14:02.804 { 00:14:02.804 "name": null, 00:14:02.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.804 "is_configured": false, 00:14:02.804 "data_offset": 0, 00:14:02.804 "data_size": 63488 00:14:02.804 }, 00:14:02.804 { 00:14:02.804 "name": null, 00:14:02.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.804 "is_configured": false, 00:14:02.804 "data_offset": 2048, 00:14:02.804 "data_size": 63488 00:14:02.804 }, 00:14:02.804 { 00:14:02.804 "name": "BaseBdev3", 00:14:02.804 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:14:02.804 "is_configured": true, 00:14:02.804 "data_offset": 2048, 00:14:02.804 "data_size": 63488 00:14:02.804 }, 00:14:02.804 { 00:14:02.804 "name": "BaseBdev4", 00:14:02.804 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:14:02.804 "is_configured": true, 00:14:02.804 "data_offset": 2048, 00:14:02.804 "data_size": 63488 00:14:02.804 } 00:14:02.804 ] 00:14:02.804 }' 00:14:02.804 21:45:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.804 21:45:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.063 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.063 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.063 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.063 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.063 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.063 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.064 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.064 21:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.064 21:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.064 21:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.323 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.323 "name": "raid_bdev1", 00:14:03.323 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:14:03.323 "strip_size_kb": 0, 00:14:03.323 "state": "online", 00:14:03.323 "raid_level": "raid1", 00:14:03.323 "superblock": true, 00:14:03.323 "num_base_bdevs": 4, 00:14:03.323 "num_base_bdevs_discovered": 2, 00:14:03.323 "num_base_bdevs_operational": 2, 00:14:03.323 "base_bdevs_list": [ 00:14:03.323 { 00:14:03.323 "name": null, 00:14:03.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.323 "is_configured": false, 00:14:03.323 "data_offset": 0, 00:14:03.323 "data_size": 63488 00:14:03.323 }, 00:14:03.323 
{ 00:14:03.323 "name": null, 00:14:03.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.323 "is_configured": false, 00:14:03.323 "data_offset": 2048, 00:14:03.323 "data_size": 63488 00:14:03.323 }, 00:14:03.323 { 00:14:03.323 "name": "BaseBdev3", 00:14:03.323 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:14:03.323 "is_configured": true, 00:14:03.323 "data_offset": 2048, 00:14:03.323 "data_size": 63488 00:14:03.323 }, 00:14:03.323 { 00:14:03.323 "name": "BaseBdev4", 00:14:03.323 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:14:03.323 "is_configured": true, 00:14:03.323 "data_offset": 2048, 00:14:03.323 "data_size": 63488 00:14:03.323 } 00:14:03.323 ] 00:14:03.323 }' 00:14:03.323 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.323 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.323 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.323 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.323 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:03.323 21:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.323 21:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.323 21:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.323 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:03.323 21:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.323 21:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.323 [2024-09-29 21:45:22.155834] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:03.324 [2024-09-29 21:45:22.155889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.324 [2024-09-29 21:45:22.155909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:03.324 [2024-09-29 21:45:22.155919] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.324 [2024-09-29 21:45:22.156360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.324 [2024-09-29 21:45:22.156382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:03.324 [2024-09-29 21:45:22.156452] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:03.324 [2024-09-29 21:45:22.156474] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:03.324 [2024-09-29 21:45:22.156485] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:03.324 [2024-09-29 21:45:22.156498] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:03.324 BaseBdev1 00:14:03.324 21:45:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.324 21:45:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:04.262 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:04.262 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.262 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.263 21:45:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.263 "name": "raid_bdev1", 00:14:04.263 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:14:04.263 "strip_size_kb": 0, 00:14:04.263 "state": "online", 00:14:04.263 "raid_level": "raid1", 00:14:04.263 "superblock": true, 00:14:04.263 "num_base_bdevs": 4, 00:14:04.263 "num_base_bdevs_discovered": 2, 00:14:04.263 "num_base_bdevs_operational": 2, 00:14:04.263 "base_bdevs_list": [ 00:14:04.263 { 00:14:04.263 "name": null, 00:14:04.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.263 "is_configured": false, 00:14:04.263 "data_offset": 0, 00:14:04.263 "data_size": 63488 00:14:04.263 }, 00:14:04.263 { 00:14:04.263 "name": null, 00:14:04.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.263 
"is_configured": false, 00:14:04.263 "data_offset": 2048, 00:14:04.263 "data_size": 63488 00:14:04.263 }, 00:14:04.263 { 00:14:04.263 "name": "BaseBdev3", 00:14:04.263 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:14:04.263 "is_configured": true, 00:14:04.263 "data_offset": 2048, 00:14:04.263 "data_size": 63488 00:14:04.263 }, 00:14:04.263 { 00:14:04.263 "name": "BaseBdev4", 00:14:04.263 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:14:04.263 "is_configured": true, 00:14:04.263 "data_offset": 2048, 00:14:04.263 "data_size": 63488 00:14:04.263 } 00:14:04.263 ] 00:14:04.263 }' 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.263 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:04.832 "name": "raid_bdev1", 00:14:04.832 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:14:04.832 "strip_size_kb": 0, 00:14:04.832 "state": "online", 00:14:04.832 "raid_level": "raid1", 00:14:04.832 "superblock": true, 00:14:04.832 "num_base_bdevs": 4, 00:14:04.832 "num_base_bdevs_discovered": 2, 00:14:04.832 "num_base_bdevs_operational": 2, 00:14:04.832 "base_bdevs_list": [ 00:14:04.832 { 00:14:04.832 "name": null, 00:14:04.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.832 "is_configured": false, 00:14:04.832 "data_offset": 0, 00:14:04.832 "data_size": 63488 00:14:04.832 }, 00:14:04.832 { 00:14:04.832 "name": null, 00:14:04.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.832 "is_configured": false, 00:14:04.832 "data_offset": 2048, 00:14:04.832 "data_size": 63488 00:14:04.832 }, 00:14:04.832 { 00:14:04.832 "name": "BaseBdev3", 00:14:04.832 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:14:04.832 "is_configured": true, 00:14:04.832 "data_offset": 2048, 00:14:04.832 "data_size": 63488 00:14:04.832 }, 00:14:04.832 { 00:14:04.832 "name": "BaseBdev4", 00:14:04.832 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:14:04.832 "is_configured": true, 00:14:04.832 "data_offset": 2048, 00:14:04.832 "data_size": 63488 00:14:04.832 } 00:14:04.832 ] 00:14:04.832 }' 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.832 [2024-09-29 21:45:23.749125] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.832 [2024-09-29 21:45:23.749286] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:04.832 [2024-09-29 21:45:23.749299] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:04.832 request: 00:14:04.832 { 00:14:04.832 "base_bdev": "BaseBdev1", 00:14:04.832 "raid_bdev": "raid_bdev1", 00:14:04.832 "method": "bdev_raid_add_base_bdev", 00:14:04.832 "req_id": 1 00:14:04.832 } 00:14:04.832 Got JSON-RPC error response 00:14:04.832 response: 00:14:04.832 { 00:14:04.832 "code": -22, 00:14:04.832 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:04.832 } 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:04.832 21:45:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.211 "name": "raid_bdev1", 00:14:06.211 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:14:06.211 "strip_size_kb": 0, 00:14:06.211 "state": "online", 00:14:06.211 "raid_level": "raid1", 00:14:06.211 "superblock": true, 00:14:06.211 "num_base_bdevs": 4, 00:14:06.211 "num_base_bdevs_discovered": 2, 00:14:06.211 "num_base_bdevs_operational": 2, 00:14:06.211 "base_bdevs_list": [ 00:14:06.211 { 00:14:06.211 "name": null, 00:14:06.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.211 "is_configured": false, 00:14:06.211 "data_offset": 0, 00:14:06.211 "data_size": 63488 00:14:06.211 }, 00:14:06.211 { 00:14:06.211 "name": null, 00:14:06.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.211 "is_configured": false, 00:14:06.211 "data_offset": 2048, 00:14:06.211 "data_size": 63488 00:14:06.211 }, 00:14:06.211 { 00:14:06.211 "name": "BaseBdev3", 00:14:06.211 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:14:06.211 "is_configured": true, 00:14:06.211 "data_offset": 2048, 00:14:06.211 "data_size": 63488 00:14:06.211 }, 00:14:06.211 { 00:14:06.211 "name": "BaseBdev4", 00:14:06.211 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:14:06.211 "is_configured": true, 00:14:06.211 "data_offset": 2048, 00:14:06.211 "data_size": 63488 00:14:06.211 } 00:14:06.211 ] 00:14:06.211 }' 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.211 21:45:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.211 21:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.211 21:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.471 21:45:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.471 21:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.472 "name": "raid_bdev1", 00:14:06.472 "uuid": "a9631e97-c673-4479-ad2d-f64f7ed61230", 00:14:06.472 "strip_size_kb": 0, 00:14:06.472 "state": "online", 00:14:06.472 "raid_level": "raid1", 00:14:06.472 "superblock": true, 00:14:06.472 "num_base_bdevs": 4, 00:14:06.472 "num_base_bdevs_discovered": 2, 00:14:06.472 "num_base_bdevs_operational": 2, 00:14:06.472 "base_bdevs_list": [ 00:14:06.472 { 00:14:06.472 "name": null, 00:14:06.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.472 "is_configured": false, 00:14:06.472 "data_offset": 0, 00:14:06.472 "data_size": 63488 00:14:06.472 }, 00:14:06.472 { 00:14:06.472 "name": null, 00:14:06.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.472 "is_configured": false, 00:14:06.472 "data_offset": 2048, 00:14:06.472 "data_size": 63488 00:14:06.472 }, 00:14:06.472 { 00:14:06.472 "name": "BaseBdev3", 00:14:06.472 "uuid": "0985f655-defd-500f-8d76-4b07675009ef", 00:14:06.472 "is_configured": true, 00:14:06.472 "data_offset": 2048, 00:14:06.472 "data_size": 63488 00:14:06.472 }, 
00:14:06.472 { 00:14:06.472 "name": "BaseBdev4", 00:14:06.472 "uuid": "749fa13d-cd12-5c6a-8bbe-49d90c900ea0", 00:14:06.472 "is_configured": true, 00:14:06.472 "data_offset": 2048, 00:14:06.472 "data_size": 63488 00:14:06.472 } 00:14:06.472 ] 00:14:06.472 }' 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78053 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78053 ']' 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 78053 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78053 00:14:06.472 killing process with pid 78053 00:14:06.472 Received shutdown signal, test time was about 60.000000 seconds 00:14:06.472 00:14:06.472 Latency(us) 00:14:06.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.472 =================================================================================================================== 00:14:06.472 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # 
'[' reactor_0 = sudo ']' 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78053' 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 78053 00:14:06.472 [2024-09-29 21:45:25.363127] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.472 [2024-09-29 21:45:25.363249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.472 [2024-09-29 21:45:25.363308] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.472 [2024-09-29 21:45:25.363318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:06.472 21:45:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 78053 00:14:07.041 [2024-09-29 21:45:25.819907] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.422 21:45:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:08.422 00:14:08.422 real 0m24.982s 00:14:08.422 user 0m29.679s 00:14:08.422 sys 0m3.937s 00:14:08.422 21:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.422 21:45:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.422 ************************************ 00:14:08.422 END TEST raid_rebuild_test_sb 00:14:08.422 ************************************ 00:14:08.422 21:45:27 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:08.422 21:45:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:08.422 21:45:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.422 21:45:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:08.422 ************************************ 00:14:08.422 START TEST raid_rebuild_test_io 
00:14:08.422 ************************************ 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( 
i++ )) 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78803 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78803 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 78803 ']' 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:08.422 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:08.422 21:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.422 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:08.422 Zero copy mechanism will not be used. 00:14:08.422 [2024-09-29 21:45:27.184097] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:08.422 [2024-09-29 21:45:27.184243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78803 ] 00:14:08.422 [2024-09-29 21:45:27.351089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.681 [2024-09-29 21:45:27.545559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.940 [2024-09-29 21:45:27.738124] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.940 [2024-09-29 21:45:27.738220] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.199 21:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:09.199 21:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:14:09.199 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:09.199 21:45:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:09.199 21:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:09.199 21:45:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.199 BaseBdev1_malloc 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.199 [2024-09-29 21:45:28.023927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:09.199 [2024-09-29 21:45:28.024002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.199 [2024-09-29 21:45:28.024025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:09.199 [2024-09-29 21:45:28.024052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.199 [2024-09-29 21:45:28.025963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.199 [2024-09-29 21:45:28.026002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:09.199 BaseBdev1 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.199 BaseBdev2_malloc 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.199 [2024-09-29 21:45:28.084230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:09.199 [2024-09-29 21:45:28.084287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.199 [2024-09-29 21:45:28.084304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:09.199 [2024-09-29 21:45:28.084317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.199 [2024-09-29 21:45:28.086215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.199 [2024-09-29 21:45:28.086254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:09.199 BaseBdev2 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.199 BaseBdev3_malloc 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.199 [2024-09-29 21:45:28.132014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:09.199 [2024-09-29 21:45:28.132073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.199 [2024-09-29 21:45:28.132092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:09.199 [2024-09-29 21:45:28.132102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.199 [2024-09-29 21:45:28.133951] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.199 [2024-09-29 21:45:28.134085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:09.199 BaseBdev3 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.199 BaseBdev4_malloc 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.199 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:14:09.458 [2024-09-29 21:45:28.183928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:09.458 [2024-09-29 21:45:28.183980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.458 [2024-09-29 21:45:28.183997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:09.458 [2024-09-29 21:45:28.184006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.458 [2024-09-29 21:45:28.185984] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.458 [2024-09-29 21:45:28.186073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:09.458 BaseBdev4 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.458 spare_malloc 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.458 spare_delay 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:09.458 21:45:28 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.458 [2024-09-29 21:45:28.251278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:09.458 [2024-09-29 21:45:28.251333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.458 [2024-09-29 21:45:28.251351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:09.458 [2024-09-29 21:45:28.251362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.458 [2024-09-29 21:45:28.253251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.458 [2024-09-29 21:45:28.253346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:09.458 spare 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.458 [2024-09-29 21:45:28.263310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.458 [2024-09-29 21:45:28.264947] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:09.458 [2024-09-29 21:45:28.265011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:09.458 [2024-09-29 21:45:28.265071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:09.458 [2024-09-29 21:45:28.265141] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:09.458 [2024-09-29 21:45:28.265153] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:09.458 [2024-09-29 21:45:28.265377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:09.458 [2024-09-29 21:45:28.265526] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:09.458 [2024-09-29 21:45:28.265536] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:09.458 [2024-09-29 21:45:28.265682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.458 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.458 "name": "raid_bdev1", 00:14:09.458 "uuid": "408d426b-5284-47f5-b9a9-e49dccb51420", 00:14:09.458 "strip_size_kb": 0, 00:14:09.458 "state": "online", 00:14:09.458 "raid_level": "raid1", 00:14:09.458 "superblock": false, 00:14:09.458 "num_base_bdevs": 4, 00:14:09.458 "num_base_bdevs_discovered": 4, 00:14:09.458 "num_base_bdevs_operational": 4, 00:14:09.458 "base_bdevs_list": [ 00:14:09.458 { 00:14:09.458 "name": "BaseBdev1", 00:14:09.459 "uuid": "9d2b5d65-318f-5feb-abe3-2ca9741d632b", 00:14:09.459 "is_configured": true, 00:14:09.459 "data_offset": 0, 00:14:09.459 "data_size": 65536 00:14:09.459 }, 00:14:09.459 { 00:14:09.459 "name": "BaseBdev2", 00:14:09.459 "uuid": "1bcb531c-26eb-5aaf-b9e7-7aa4feb5ece9", 00:14:09.459 "is_configured": true, 00:14:09.459 "data_offset": 0, 00:14:09.459 "data_size": 65536 00:14:09.459 }, 00:14:09.459 { 00:14:09.459 "name": "BaseBdev3", 00:14:09.459 "uuid": "5cd656bf-c94d-5477-9919-20cc2b4c772a", 00:14:09.459 "is_configured": true, 00:14:09.459 "data_offset": 0, 00:14:09.459 "data_size": 65536 00:14:09.459 }, 00:14:09.459 { 00:14:09.459 "name": "BaseBdev4", 00:14:09.459 "uuid": "4556d319-e362-58a9-9c4e-062927a7e7fb", 00:14:09.459 "is_configured": true, 00:14:09.459 "data_offset": 0, 00:14:09.459 "data_size": 65536 00:14:09.459 } 00:14:09.459 ] 00:14:09.459 }' 00:14:09.459 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:09.459 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.025 [2024-09-29 21:45:28.742734] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.025 [2024-09-29 21:45:28.838250] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.025 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.026 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.026 21:45:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.026 "name": "raid_bdev1", 00:14:10.026 "uuid": "408d426b-5284-47f5-b9a9-e49dccb51420", 00:14:10.026 "strip_size_kb": 0, 00:14:10.026 "state": "online", 00:14:10.026 "raid_level": "raid1", 00:14:10.026 "superblock": false, 00:14:10.026 "num_base_bdevs": 4, 00:14:10.026 "num_base_bdevs_discovered": 3, 00:14:10.026 "num_base_bdevs_operational": 3, 00:14:10.026 "base_bdevs_list": [ 00:14:10.026 { 00:14:10.026 "name": null, 00:14:10.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.026 "is_configured": false, 00:14:10.026 "data_offset": 0, 00:14:10.026 "data_size": 65536 00:14:10.026 }, 00:14:10.026 { 00:14:10.026 "name": "BaseBdev2", 00:14:10.026 "uuid": "1bcb531c-26eb-5aaf-b9e7-7aa4feb5ece9", 00:14:10.026 "is_configured": true, 00:14:10.026 "data_offset": 0, 00:14:10.026 "data_size": 65536 00:14:10.026 }, 00:14:10.026 { 00:14:10.026 "name": "BaseBdev3", 00:14:10.026 "uuid": "5cd656bf-c94d-5477-9919-20cc2b4c772a", 00:14:10.026 "is_configured": true, 00:14:10.026 "data_offset": 0, 00:14:10.026 "data_size": 65536 00:14:10.026 }, 00:14:10.026 { 00:14:10.026 "name": "BaseBdev4", 00:14:10.026 "uuid": "4556d319-e362-58a9-9c4e-062927a7e7fb", 00:14:10.026 "is_configured": true, 00:14:10.026 "data_offset": 0, 00:14:10.026 "data_size": 65536 00:14:10.026 } 00:14:10.026 ] 00:14:10.026 }' 00:14:10.026 21:45:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.026 21:45:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.026 [2024-09-29 21:45:28.909608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:10.026 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:10.026 Zero copy mechanism will not be used. 00:14:10.026 Running I/O for 60 seconds... 
00:14:10.592 21:45:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:10.592 21:45:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.592 21:45:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.592 [2024-09-29 21:45:29.276242] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:10.592 21:45:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.592 21:45:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:10.592 [2024-09-29 21:45:29.309086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:10.592 [2024-09-29 21:45:29.310951] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:10.592 [2024-09-29 21:45:29.425316] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:10.592 [2024-09-29 21:45:29.425864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:10.851 [2024-09-29 21:45:29.636503] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:10.851 [2024-09-29 21:45:29.637369] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:11.110 145.00 IOPS, 435.00 MiB/s [2024-09-29 21:45:29.977020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:11.369 [2024-09-29 21:45:30.180157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:11.369 [2024-09-29 21:45:30.180478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:11.369 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.369 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.369 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.369 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.369 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.369 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.369 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.369 21:45:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.369 21:45:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.369 21:45:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.628 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.628 "name": "raid_bdev1", 00:14:11.628 "uuid": "408d426b-5284-47f5-b9a9-e49dccb51420", 00:14:11.628 "strip_size_kb": 0, 00:14:11.628 "state": "online", 00:14:11.628 "raid_level": "raid1", 00:14:11.628 "superblock": false, 00:14:11.628 "num_base_bdevs": 4, 00:14:11.628 "num_base_bdevs_discovered": 4, 00:14:11.628 "num_base_bdevs_operational": 4, 00:14:11.628 "process": { 00:14:11.628 "type": "rebuild", 00:14:11.628 "target": "spare", 00:14:11.628 "progress": { 00:14:11.628 "blocks": 10240, 00:14:11.628 "percent": 15 00:14:11.628 } 00:14:11.628 }, 00:14:11.628 "base_bdevs_list": [ 00:14:11.628 { 00:14:11.628 "name": "spare", 00:14:11.628 "uuid": "bba21375-a37d-59fd-8f72-2b757f90f28a", 00:14:11.628 
"is_configured": true, 00:14:11.628 "data_offset": 0, 00:14:11.628 "data_size": 65536 00:14:11.628 }, 00:14:11.628 { 00:14:11.628 "name": "BaseBdev2", 00:14:11.628 "uuid": "1bcb531c-26eb-5aaf-b9e7-7aa4feb5ece9", 00:14:11.628 "is_configured": true, 00:14:11.628 "data_offset": 0, 00:14:11.628 "data_size": 65536 00:14:11.628 }, 00:14:11.628 { 00:14:11.628 "name": "BaseBdev3", 00:14:11.628 "uuid": "5cd656bf-c94d-5477-9919-20cc2b4c772a", 00:14:11.628 "is_configured": true, 00:14:11.628 "data_offset": 0, 00:14:11.628 "data_size": 65536 00:14:11.628 }, 00:14:11.628 { 00:14:11.628 "name": "BaseBdev4", 00:14:11.628 "uuid": "4556d319-e362-58a9-9c4e-062927a7e7fb", 00:14:11.628 "is_configured": true, 00:14:11.628 "data_offset": 0, 00:14:11.628 "data_size": 65536 00:14:11.628 } 00:14:11.628 ] 00:14:11.628 }' 00:14:11.628 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.628 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.628 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.628 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.628 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:11.628 21:45:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.628 21:45:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.628 [2024-09-29 21:45:30.460017] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.887 [2024-09-29 21:45:30.620644] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:11.887 [2024-09-29 21:45:30.629496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.887 [2024-09-29 
21:45:30.629544] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.887 [2024-09-29 21:45:30.629558] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:11.887 [2024-09-29 21:45:30.657185] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.887 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.887 "name": "raid_bdev1", 00:14:11.887 "uuid": "408d426b-5284-47f5-b9a9-e49dccb51420", 00:14:11.887 "strip_size_kb": 0, 00:14:11.887 "state": "online", 00:14:11.887 "raid_level": "raid1", 00:14:11.887 "superblock": false, 00:14:11.887 "num_base_bdevs": 4, 00:14:11.887 "num_base_bdevs_discovered": 3, 00:14:11.887 "num_base_bdevs_operational": 3, 00:14:11.887 "base_bdevs_list": [ 00:14:11.887 { 00:14:11.887 "name": null, 00:14:11.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.887 "is_configured": false, 00:14:11.887 "data_offset": 0, 00:14:11.887 "data_size": 65536 00:14:11.887 }, 00:14:11.887 { 00:14:11.887 "name": "BaseBdev2", 00:14:11.887 "uuid": "1bcb531c-26eb-5aaf-b9e7-7aa4feb5ece9", 00:14:11.887 "is_configured": true, 00:14:11.888 "data_offset": 0, 00:14:11.888 "data_size": 65536 00:14:11.888 }, 00:14:11.888 { 00:14:11.888 "name": "BaseBdev3", 00:14:11.888 "uuid": "5cd656bf-c94d-5477-9919-20cc2b4c772a", 00:14:11.888 "is_configured": true, 00:14:11.888 "data_offset": 0, 00:14:11.888 "data_size": 65536 00:14:11.888 }, 00:14:11.888 { 00:14:11.888 "name": "BaseBdev4", 00:14:11.888 "uuid": "4556d319-e362-58a9-9c4e-062927a7e7fb", 00:14:11.888 "is_configured": true, 00:14:11.888 "data_offset": 0, 00:14:11.888 "data_size": 65536 00:14:11.888 } 00:14:11.888 ] 00:14:11.888 }' 00:14:11.888 21:45:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.888 21:45:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.147 123.00 IOPS, 369.00 MiB/s 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.147 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:12.147 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.147 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:12.147 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.147 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.147 21:45:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.147 21:45:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.147 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.147 21:45:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.406 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.406 "name": "raid_bdev1", 00:14:12.406 "uuid": "408d426b-5284-47f5-b9a9-e49dccb51420", 00:14:12.406 "strip_size_kb": 0, 00:14:12.406 "state": "online", 00:14:12.406 "raid_level": "raid1", 00:14:12.406 "superblock": false, 00:14:12.406 "num_base_bdevs": 4, 00:14:12.406 "num_base_bdevs_discovered": 3, 00:14:12.406 "num_base_bdevs_operational": 3, 00:14:12.406 "base_bdevs_list": [ 00:14:12.406 { 00:14:12.406 "name": null, 00:14:12.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.406 "is_configured": false, 00:14:12.406 "data_offset": 0, 00:14:12.406 "data_size": 65536 00:14:12.406 }, 00:14:12.406 { 00:14:12.406 "name": "BaseBdev2", 00:14:12.406 "uuid": "1bcb531c-26eb-5aaf-b9e7-7aa4feb5ece9", 00:14:12.406 "is_configured": true, 00:14:12.406 "data_offset": 0, 00:14:12.406 "data_size": 65536 00:14:12.406 }, 00:14:12.406 { 00:14:12.406 "name": "BaseBdev3", 00:14:12.407 "uuid": "5cd656bf-c94d-5477-9919-20cc2b4c772a", 00:14:12.407 "is_configured": true, 00:14:12.407 "data_offset": 0, 
00:14:12.407 "data_size": 65536 00:14:12.407 }, 00:14:12.407 { 00:14:12.407 "name": "BaseBdev4", 00:14:12.407 "uuid": "4556d319-e362-58a9-9c4e-062927a7e7fb", 00:14:12.407 "is_configured": true, 00:14:12.407 "data_offset": 0, 00:14:12.407 "data_size": 65536 00:14:12.407 } 00:14:12.407 ] 00:14:12.407 }' 00:14:12.407 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.407 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.407 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.407 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.407 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:12.407 21:45:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.407 21:45:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.407 [2024-09-29 21:45:31.257276] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:12.407 21:45:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.407 21:45:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:12.407 [2024-09-29 21:45:31.295339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:12.407 [2024-09-29 21:45:31.297184] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:12.670 [2024-09-29 21:45:31.398680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:12.670 [2024-09-29 21:45:31.399255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:12.670 
[2024-09-29 21:45:31.528735] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:12.670 [2024-09-29 21:45:31.529161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:12.930 [2024-09-29 21:45:31.797180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:12.930 [2024-09-29 21:45:31.797754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:13.189 150.67 IOPS, 452.00 MiB/s [2024-09-29 21:45:32.030153] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:13.448 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.448 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.449 "name": "raid_bdev1", 00:14:13.449 "uuid": "408d426b-5284-47f5-b9a9-e49dccb51420", 00:14:13.449 "strip_size_kb": 0, 00:14:13.449 "state": "online", 00:14:13.449 "raid_level": "raid1", 00:14:13.449 "superblock": false, 00:14:13.449 "num_base_bdevs": 4, 00:14:13.449 "num_base_bdevs_discovered": 4, 00:14:13.449 "num_base_bdevs_operational": 4, 00:14:13.449 "process": { 00:14:13.449 "type": "rebuild", 00:14:13.449 "target": "spare", 00:14:13.449 "progress": { 00:14:13.449 "blocks": 12288, 00:14:13.449 "percent": 18 00:14:13.449 } 00:14:13.449 }, 00:14:13.449 "base_bdevs_list": [ 00:14:13.449 { 00:14:13.449 "name": "spare", 00:14:13.449 "uuid": "bba21375-a37d-59fd-8f72-2b757f90f28a", 00:14:13.449 "is_configured": true, 00:14:13.449 "data_offset": 0, 00:14:13.449 "data_size": 65536 00:14:13.449 }, 00:14:13.449 { 00:14:13.449 "name": "BaseBdev2", 00:14:13.449 "uuid": "1bcb531c-26eb-5aaf-b9e7-7aa4feb5ece9", 00:14:13.449 "is_configured": true, 00:14:13.449 "data_offset": 0, 00:14:13.449 "data_size": 65536 00:14:13.449 }, 00:14:13.449 { 00:14:13.449 "name": "BaseBdev3", 00:14:13.449 "uuid": "5cd656bf-c94d-5477-9919-20cc2b4c772a", 00:14:13.449 "is_configured": true, 00:14:13.449 "data_offset": 0, 00:14:13.449 "data_size": 65536 00:14:13.449 }, 00:14:13.449 { 00:14:13.449 "name": "BaseBdev4", 00:14:13.449 "uuid": "4556d319-e362-58a9-9c4e-062927a7e7fb", 00:14:13.449 "is_configured": true, 00:14:13.449 "data_offset": 0, 00:14:13.449 "data_size": 65536 00:14:13.449 } 00:14:13.449 ] 00:14:13.449 }' 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.449 21:45:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.709 [2024-09-29 21:45:32.434099] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:13.709 [2024-09-29 21:45:32.463039] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:13.709 [2024-09-29 21:45:32.463317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:13.709 [2024-09-29 21:45:32.565926] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:13.709 [2024-09-29 21:45:32.565954] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:13.709 [2024-09-29 21:45:32.566003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:13.709 [2024-09-29 21:45:32.578934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- 
# base_bdevs[1]= 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.709 "name": "raid_bdev1", 00:14:13.709 "uuid": "408d426b-5284-47f5-b9a9-e49dccb51420", 00:14:13.709 "strip_size_kb": 0, 00:14:13.709 "state": "online", 00:14:13.709 "raid_level": "raid1", 00:14:13.709 "superblock": false, 00:14:13.709 "num_base_bdevs": 4, 00:14:13.709 "num_base_bdevs_discovered": 3, 00:14:13.709 "num_base_bdevs_operational": 3, 00:14:13.709 "process": { 00:14:13.709 "type": "rebuild", 00:14:13.709 "target": "spare", 00:14:13.709 "progress": { 00:14:13.709 "blocks": 16384, 00:14:13.709 "percent": 25 00:14:13.709 } 00:14:13.709 }, 00:14:13.709 "base_bdevs_list": [ 00:14:13.709 { 00:14:13.709 "name": "spare", 
00:14:13.709 "uuid": "bba21375-a37d-59fd-8f72-2b757f90f28a", 00:14:13.709 "is_configured": true, 00:14:13.709 "data_offset": 0, 00:14:13.709 "data_size": 65536 00:14:13.709 }, 00:14:13.709 { 00:14:13.709 "name": null, 00:14:13.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.709 "is_configured": false, 00:14:13.709 "data_offset": 0, 00:14:13.709 "data_size": 65536 00:14:13.709 }, 00:14:13.709 { 00:14:13.709 "name": "BaseBdev3", 00:14:13.709 "uuid": "5cd656bf-c94d-5477-9919-20cc2b4c772a", 00:14:13.709 "is_configured": true, 00:14:13.709 "data_offset": 0, 00:14:13.709 "data_size": 65536 00:14:13.709 }, 00:14:13.709 { 00:14:13.709 "name": "BaseBdev4", 00:14:13.709 "uuid": "4556d319-e362-58a9-9c4e-062927a7e7fb", 00:14:13.709 "is_configured": true, 00:14:13.709 "data_offset": 0, 00:14:13.709 "data_size": 65536 00:14:13.709 } 00:14:13.709 ] 00:14:13.709 }' 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.709 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=488 00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
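The `progress.percent` values reported by `bdev_raid_get_bdevs` in this trace are consistent with floor division of rebuilt blocks by the base bdev `data_size` (65536 blocks here): 12288 blocks → 18%, 16384 → 25%, and so on. A minimal sketch of that relationship, assuming this simple formula (the actual SPDK-internal computation may differ):

```shell
#!/usr/bin/env bash
# Sketch: reproduce the percent values seen in the rpc output above,
# assuming percent = blocks * 100 / data_size with integer (floor) division.
data_size=65536                 # blocks per base bdev, from the rpc output
for blocks in 12288 16384 18432 34816 55296; do
  echo "$blocks -> $(( blocks * 100 / data_size ))%"
done
```

This yields 18%, 25%, 28%, 53%, and 84% — matching each progress snapshot the test polls below.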
00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.969 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.969 "name": "raid_bdev1", 00:14:13.969 "uuid": "408d426b-5284-47f5-b9a9-e49dccb51420", 00:14:13.969 "strip_size_kb": 0, 00:14:13.969 "state": "online", 00:14:13.969 "raid_level": "raid1", 00:14:13.969 "superblock": false, 00:14:13.969 "num_base_bdevs": 4, 00:14:13.969 "num_base_bdevs_discovered": 3, 00:14:13.970 "num_base_bdevs_operational": 3, 00:14:13.970 "process": { 00:14:13.970 "type": "rebuild", 00:14:13.970 "target": "spare", 00:14:13.970 "progress": { 00:14:13.970 "blocks": 18432, 00:14:13.970 "percent": 28 00:14:13.970 } 00:14:13.970 }, 00:14:13.970 "base_bdevs_list": [ 00:14:13.970 { 00:14:13.970 "name": "spare", 00:14:13.970 "uuid": "bba21375-a37d-59fd-8f72-2b757f90f28a", 00:14:13.970 "is_configured": true, 00:14:13.970 "data_offset": 0, 00:14:13.970 "data_size": 65536 00:14:13.970 }, 00:14:13.970 { 00:14:13.970 "name": null, 00:14:13.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.970 "is_configured": false, 00:14:13.970 "data_offset": 0, 00:14:13.970 "data_size": 65536 00:14:13.970 }, 00:14:13.970 { 00:14:13.970 "name": "BaseBdev3", 00:14:13.970 "uuid": "5cd656bf-c94d-5477-9919-20cc2b4c772a", 00:14:13.970 "is_configured": true, 00:14:13.970 "data_offset": 0, 00:14:13.970 "data_size": 65536 
00:14:13.970 }, 00:14:13.970 { 00:14:13.970 "name": "BaseBdev4", 00:14:13.970 "uuid": "4556d319-e362-58a9-9c4e-062927a7e7fb", 00:14:13.970 "is_configured": true, 00:14:13.970 "data_offset": 0, 00:14:13.970 "data_size": 65536 00:14:13.970 } 00:14:13.970 ] 00:14:13.970 }' 00:14:13.970 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.970 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.970 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.970 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.970 21:45:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:13.970 133.00 IOPS, 399.00 MiB/s [2024-09-29 21:45:32.932232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:13.970 [2024-09-29 21:45:32.932456] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:14.540 [2024-09-29 21:45:33.242332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:15.109 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:15.109 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.109 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.109 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.109 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.109 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:14:15.109 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.109 21:45:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.109 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.109 21:45:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.109 21:45:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.109 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.109 "name": "raid_bdev1", 00:14:15.109 "uuid": "408d426b-5284-47f5-b9a9-e49dccb51420", 00:14:15.109 "strip_size_kb": 0, 00:14:15.109 "state": "online", 00:14:15.109 "raid_level": "raid1", 00:14:15.109 "superblock": false, 00:14:15.109 "num_base_bdevs": 4, 00:14:15.109 "num_base_bdevs_discovered": 3, 00:14:15.109 "num_base_bdevs_operational": 3, 00:14:15.109 "process": { 00:14:15.109 "type": "rebuild", 00:14:15.109 "target": "spare", 00:14:15.109 "progress": { 00:14:15.109 "blocks": 34816, 00:14:15.109 "percent": 53 00:14:15.109 } 00:14:15.109 }, 00:14:15.109 "base_bdevs_list": [ 00:14:15.109 { 00:14:15.109 "name": "spare", 00:14:15.109 "uuid": "bba21375-a37d-59fd-8f72-2b757f90f28a", 00:14:15.109 "is_configured": true, 00:14:15.109 "data_offset": 0, 00:14:15.109 "data_size": 65536 00:14:15.109 }, 00:14:15.109 { 00:14:15.109 "name": null, 00:14:15.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.109 "is_configured": false, 00:14:15.109 "data_offset": 0, 00:14:15.109 "data_size": 65536 00:14:15.109 }, 00:14:15.109 { 00:14:15.109 "name": "BaseBdev3", 00:14:15.109 "uuid": "5cd656bf-c94d-5477-9919-20cc2b4c772a", 00:14:15.109 "is_configured": true, 00:14:15.109 "data_offset": 0, 00:14:15.109 "data_size": 65536 00:14:15.109 }, 00:14:15.109 { 00:14:15.109 "name": "BaseBdev4", 00:14:15.109 "uuid": 
"4556d319-e362-58a9-9c4e-062927a7e7fb", 00:14:15.109 "is_configured": true, 00:14:15.109 "data_offset": 0, 00:14:15.109 "data_size": 65536 00:14:15.109 } 00:14:15.110 ] 00:14:15.110 }' 00:14:15.110 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.110 116.20 IOPS, 348.60 MiB/s 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.110 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.110 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.110 21:45:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:15.679 [2024-09-29 21:45:34.491845] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:16.200 103.67 IOPS, 311.00 MiB/s 21:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.200 21:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.200 21:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.200 21:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.200 21:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.200 21:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.200 21:45:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.200 21:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.200 21:45:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.200 21:45:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.200 21:45:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.200 21:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.200 "name": "raid_bdev1", 00:14:16.200 "uuid": "408d426b-5284-47f5-b9a9-e49dccb51420", 00:14:16.200 "strip_size_kb": 0, 00:14:16.200 "state": "online", 00:14:16.200 "raid_level": "raid1", 00:14:16.200 "superblock": false, 00:14:16.200 "num_base_bdevs": 4, 00:14:16.200 "num_base_bdevs_discovered": 3, 00:14:16.200 "num_base_bdevs_operational": 3, 00:14:16.200 "process": { 00:14:16.200 "type": "rebuild", 00:14:16.200 "target": "spare", 00:14:16.200 "progress": { 00:14:16.200 "blocks": 55296, 00:14:16.200 "percent": 84 00:14:16.200 } 00:14:16.200 }, 00:14:16.200 "base_bdevs_list": [ 00:14:16.200 { 00:14:16.200 "name": "spare", 00:14:16.200 "uuid": "bba21375-a37d-59fd-8f72-2b757f90f28a", 00:14:16.200 "is_configured": true, 00:14:16.200 "data_offset": 0, 00:14:16.200 "data_size": 65536 00:14:16.200 }, 00:14:16.200 { 00:14:16.200 "name": null, 00:14:16.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.200 "is_configured": false, 00:14:16.200 "data_offset": 0, 00:14:16.200 "data_size": 65536 00:14:16.200 }, 00:14:16.200 { 00:14:16.200 "name": "BaseBdev3", 00:14:16.200 "uuid": "5cd656bf-c94d-5477-9919-20cc2b4c772a", 00:14:16.200 "is_configured": true, 00:14:16.200 "data_offset": 0, 00:14:16.200 "data_size": 65536 00:14:16.200 }, 00:14:16.200 { 00:14:16.200 "name": "BaseBdev4", 00:14:16.200 "uuid": "4556d319-e362-58a9-9c4e-062927a7e7fb", 00:14:16.200 "is_configured": true, 00:14:16.200 "data_offset": 0, 00:14:16.200 "data_size": 65536 00:14:16.200 } 00:14:16.200 ] 00:14:16.200 }' 00:14:16.200 21:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.200 [2024-09-29 21:45:35.049010] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:16.200 21:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.200 21:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.200 21:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.200 21:45:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:16.460 [2024-09-29 21:45:35.255081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:16.720 [2024-09-29 21:45:35.683107] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:16.980 [2024-09-29 21:45:35.782927] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:16.980 [2024-09-29 21:45:35.784322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.240 93.71 IOPS, 281.14 MiB/s 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.240 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.240 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.240 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.240 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.240 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.240 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.240 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.240 21:45:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.240 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.240 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.240 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.240 "name": "raid_bdev1", 00:14:17.240 "uuid": "408d426b-5284-47f5-b9a9-e49dccb51420", 00:14:17.240 "strip_size_kb": 0, 00:14:17.240 "state": "online", 00:14:17.240 "raid_level": "raid1", 00:14:17.240 "superblock": false, 00:14:17.240 "num_base_bdevs": 4, 00:14:17.240 "num_base_bdevs_discovered": 3, 00:14:17.240 "num_base_bdevs_operational": 3, 00:14:17.240 "base_bdevs_list": [ 00:14:17.240 { 00:14:17.240 "name": "spare", 00:14:17.240 "uuid": "bba21375-a37d-59fd-8f72-2b757f90f28a", 00:14:17.240 "is_configured": true, 00:14:17.240 "data_offset": 0, 00:14:17.240 "data_size": 65536 00:14:17.240 }, 00:14:17.240 { 00:14:17.240 "name": null, 00:14:17.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.240 "is_configured": false, 00:14:17.240 "data_offset": 0, 00:14:17.240 "data_size": 65536 00:14:17.240 }, 00:14:17.240 { 00:14:17.240 "name": "BaseBdev3", 00:14:17.240 "uuid": "5cd656bf-c94d-5477-9919-20cc2b4c772a", 00:14:17.240 "is_configured": true, 00:14:17.240 "data_offset": 0, 00:14:17.240 "data_size": 65536 00:14:17.240 }, 00:14:17.240 { 00:14:17.240 "name": "BaseBdev4", 00:14:17.240 "uuid": "4556d319-e362-58a9-9c4e-062927a7e7fb", 00:14:17.240 "is_configured": true, 00:14:17.240 "data_offset": 0, 00:14:17.240 "data_size": 65536 00:14:17.240 } 00:14:17.240 ] 00:14:17.240 }' 00:14:17.240 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.500 "name": "raid_bdev1", 00:14:17.500 "uuid": "408d426b-5284-47f5-b9a9-e49dccb51420", 00:14:17.500 "strip_size_kb": 0, 00:14:17.500 "state": "online", 00:14:17.500 "raid_level": "raid1", 00:14:17.500 "superblock": false, 00:14:17.500 "num_base_bdevs": 4, 00:14:17.500 "num_base_bdevs_discovered": 3, 00:14:17.500 "num_base_bdevs_operational": 3, 00:14:17.500 "base_bdevs_list": [ 00:14:17.500 { 00:14:17.500 "name": "spare", 00:14:17.500 "uuid": "bba21375-a37d-59fd-8f72-2b757f90f28a", 00:14:17.500 "is_configured": true, 
00:14:17.500 "data_offset": 0, 00:14:17.500 "data_size": 65536 00:14:17.500 }, 00:14:17.500 { 00:14:17.500 "name": null, 00:14:17.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.500 "is_configured": false, 00:14:17.500 "data_offset": 0, 00:14:17.500 "data_size": 65536 00:14:17.500 }, 00:14:17.500 { 00:14:17.500 "name": "BaseBdev3", 00:14:17.500 "uuid": "5cd656bf-c94d-5477-9919-20cc2b4c772a", 00:14:17.500 "is_configured": true, 00:14:17.500 "data_offset": 0, 00:14:17.500 "data_size": 65536 00:14:17.500 }, 00:14:17.500 { 00:14:17.500 "name": "BaseBdev4", 00:14:17.500 "uuid": "4556d319-e362-58a9-9c4e-062927a7e7fb", 00:14:17.500 "is_configured": true, 00:14:17.500 "data_offset": 0, 00:14:17.500 "data_size": 65536 00:14:17.500 } 00:14:17.500 ] 00:14:17.500 }' 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.500 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.760 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.761 "name": "raid_bdev1", 00:14:17.761 "uuid": "408d426b-5284-47f5-b9a9-e49dccb51420", 00:14:17.761 "strip_size_kb": 0, 00:14:17.761 "state": "online", 00:14:17.761 "raid_level": "raid1", 00:14:17.761 "superblock": false, 00:14:17.761 "num_base_bdevs": 4, 00:14:17.761 "num_base_bdevs_discovered": 3, 00:14:17.761 "num_base_bdevs_operational": 3, 00:14:17.761 "base_bdevs_list": [ 00:14:17.761 { 00:14:17.761 "name": "spare", 00:14:17.761 "uuid": "bba21375-a37d-59fd-8f72-2b757f90f28a", 00:14:17.761 "is_configured": true, 00:14:17.761 "data_offset": 0, 00:14:17.761 "data_size": 65536 00:14:17.761 }, 00:14:17.761 { 00:14:17.761 "name": null, 00:14:17.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.761 "is_configured": false, 00:14:17.761 "data_offset": 0, 00:14:17.761 "data_size": 65536 00:14:17.761 }, 00:14:17.761 { 00:14:17.761 "name": "BaseBdev3", 00:14:17.761 "uuid": "5cd656bf-c94d-5477-9919-20cc2b4c772a", 00:14:17.761 "is_configured": true, 00:14:17.761 "data_offset": 0, 00:14:17.761 
"data_size": 65536 00:14:17.761 }, 00:14:17.761 { 00:14:17.761 "name": "BaseBdev4", 00:14:17.761 "uuid": "4556d319-e362-58a9-9c4e-062927a7e7fb", 00:14:17.761 "is_configured": true, 00:14:17.761 "data_offset": 0, 00:14:17.761 "data_size": 65536 00:14:17.761 } 00:14:17.761 ] 00:14:17.761 }' 00:14:17.761 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.761 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.020 [2024-09-29 21:45:36.841971] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:18.020 [2024-09-29 21:45:36.842015] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.020 00:14:18.020 Latency(us) 00:14:18.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.020 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:18.020 raid_bdev1 : 7.98 86.82 260.47 0.00 0.00 16507.45 316.59 118136.51 00:14:18.020 =================================================================================================================== 00:14:18.020 Total : 86.82 260.47 0.00 0.00 16507.45 316.59 118136.51 00:14:18.020 [2024-09-29 21:45:36.897418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.020 [2024-09-29 21:45:36.897496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.020 [2024-09-29 21:45:36.897598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.020 [2024-09-29 21:45:36.897648] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:18.020 { 00:14:18.020 "results": [ 00:14:18.020 { 00:14:18.020 "job": "raid_bdev1", 00:14:18.020 "core_mask": "0x1", 00:14:18.020 "workload": "randrw", 00:14:18.020 "percentage": 50, 00:14:18.020 "status": "finished", 00:14:18.020 "queue_depth": 2, 00:14:18.020 "io_size": 3145728, 00:14:18.020 "runtime": 7.981705, 00:14:18.020 "iops": 86.82355461646353, 00:14:18.020 "mibps": 260.4706638493906, 00:14:18.020 "io_failed": 0, 00:14:18.020 "io_timeout": 0, 00:14:18.020 "avg_latency_us": 16507.45014713574, 00:14:18.020 "min_latency_us": 316.5903930131004, 00:14:18.020 "max_latency_us": 118136.51004366812 00:14:18.020 } 00:14:18.020 ], 00:14:18.020 "core_count": 1 00:14:18.020 } 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.020 21:45:36 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.020 21:45:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:18.280 /dev/nbd0 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.280 1+0 records in 00:14:18.280 1+0 records out 00:14:18.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424999 s, 9.6 MB/s 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:18.280 21:45:37 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:14:18.281 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:18.281 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:18.281 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:18.281 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:18.281 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.281 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:18.541 /dev/nbd1 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.541 1+0 records in 
00:14:18.541 1+0 records out 00:14:18.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427168 s, 9.6 MB/s 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.541 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:18.801 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:18.801 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.801 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:18.801 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:18.801 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:18.801 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.801 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:19.062 21:45:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 
00:14:19.321 /dev/nbd1 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.321 1+0 records in 00:14:19.321 1+0 records out 00:14:19.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384969 s, 10.6 MB/s 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 
00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:19.321 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 
-- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:19.580 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78803 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 78803 ']' 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 78803 00:14:19.840 21:45:38 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78803 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78803' 00:14:19.840 killing process with pid 78803 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 78803 00:14:19.840 Received shutdown signal, test time was about 9.773543 seconds 00:14:19.840 00:14:19.840 Latency(us) 00:14:19.840 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.840 =================================================================================================================== 00:14:19.840 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:19.840 [2024-09-29 21:45:38.666264] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.840 21:45:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 78803 00:14:20.100 [2024-09-29 21:45:39.062226] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:21.482 00:14:21.482 real 0m13.229s 00:14:21.482 user 0m16.572s 00:14:21.482 sys 0m1.876s 00:14:21.482 ************************************ 00:14:21.482 END TEST raid_rebuild_test_io 00:14:21.482 ************************************ 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:14:21.482 21:45:40 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:21.482 21:45:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:21.482 21:45:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.482 21:45:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:21.482 ************************************ 00:14:21.482 START TEST raid_rebuild_test_sb_io 00:14:21.482 ************************************ 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:21.482 21:45:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # 
raid_pid=79212 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79212 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:21.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 79212 ']' 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.482 21:45:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.743 [2024-09-29 21:45:40.492850] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:21.743 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:21.743 Zero copy mechanism will not be used. 
00:14:21.743 [2024-09-29 21:45:40.493540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79212 ] 00:14:21.743 [2024-09-29 21:45:40.656193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.003 [2024-09-29 21:45:40.855698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.263 [2024-09-29 21:45:41.044069] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.263 [2024-09-29 21:45:41.044099] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.524 BaseBdev1_malloc 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.524 [2024-09-29 21:45:41.350973] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:22.524 [2024-09-29 21:45:41.351061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.524 [2024-09-29 21:45:41.351084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:22.524 [2024-09-29 21:45:41.351097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.524 [2024-09-29 21:45:41.353090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.524 [2024-09-29 21:45:41.353129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:22.524 BaseBdev1 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.524 BaseBdev2_malloc 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.524 [2024-09-29 21:45:41.413598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:22.524 [2024-09-29 21:45:41.413732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:22.524 [2024-09-29 21:45:41.413755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:22.524 [2024-09-29 21:45:41.413765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.524 [2024-09-29 21:45:41.415651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.524 [2024-09-29 21:45:41.415689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:22.524 BaseBdev2 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.524 BaseBdev3_malloc 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.524 [2024-09-29 21:45:41.465703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:22.524 [2024-09-29 21:45:41.465756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.524 [2024-09-29 21:45:41.465777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:22.524 
[2024-09-29 21:45:41.465788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.524 [2024-09-29 21:45:41.467669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.524 [2024-09-29 21:45:41.467708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:22.524 BaseBdev3 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.524 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.785 BaseBdev4_malloc 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.785 [2024-09-29 21:45:41.518043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:22.785 [2024-09-29 21:45:41.518092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.785 [2024-09-29 21:45:41.518111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:22.785 [2024-09-29 21:45:41.518121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.785 [2024-09-29 21:45:41.520080] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.785 [2024-09-29 21:45:41.520196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:22.785 BaseBdev4 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.785 spare_malloc 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.785 spare_delay 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.785 [2024-09-29 21:45:41.581983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:22.785 [2024-09-29 21:45:41.582106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.785 [2024-09-29 21:45:41.582128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:14:22.785 [2024-09-29 21:45:41.582139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.785 [2024-09-29 21:45:41.583986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.785 [2024-09-29 21:45:41.584025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:22.785 spare 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.785 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.785 [2024-09-29 21:45:41.594024] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.786 [2024-09-29 21:45:41.595649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.786 [2024-09-29 21:45:41.595714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.786 [2024-09-29 21:45:41.595763] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:22.786 [2024-09-29 21:45:41.595926] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:22.786 [2024-09-29 21:45:41.595939] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:22.786 [2024-09-29 21:45:41.596183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:22.786 [2024-09-29 21:45:41.596331] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:22.786 [2024-09-29 21:45:41.596341] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:22.786 [2024-09-29 21:45:41.596470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.786 "name": "raid_bdev1", 00:14:22.786 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:22.786 "strip_size_kb": 0, 00:14:22.786 "state": "online", 00:14:22.786 "raid_level": "raid1", 00:14:22.786 "superblock": true, 00:14:22.786 "num_base_bdevs": 4, 00:14:22.786 "num_base_bdevs_discovered": 4, 00:14:22.786 "num_base_bdevs_operational": 4, 00:14:22.786 "base_bdevs_list": [ 00:14:22.786 { 00:14:22.786 "name": "BaseBdev1", 00:14:22.786 "uuid": "d56e98c5-368c-5f5f-b241-d74902176e01", 00:14:22.786 "is_configured": true, 00:14:22.786 "data_offset": 2048, 00:14:22.786 "data_size": 63488 00:14:22.786 }, 00:14:22.786 { 00:14:22.786 "name": "BaseBdev2", 00:14:22.786 "uuid": "e95493f1-cd05-5670-b0dc-6e35904fa225", 00:14:22.786 "is_configured": true, 00:14:22.786 "data_offset": 2048, 00:14:22.786 "data_size": 63488 00:14:22.786 }, 00:14:22.786 { 00:14:22.786 "name": "BaseBdev3", 00:14:22.786 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:22.786 "is_configured": true, 00:14:22.786 "data_offset": 2048, 00:14:22.786 "data_size": 63488 00:14:22.786 }, 00:14:22.786 { 00:14:22.786 "name": "BaseBdev4", 00:14:22.786 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:22.786 "is_configured": true, 00:14:22.786 "data_offset": 2048, 00:14:22.786 "data_size": 63488 00:14:22.786 } 00:14:22.786 ] 00:14:22.786 }' 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.786 21:45:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.356 [2024-09-29 21:45:42.097387] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.356 [2024-09-29 21:45:42.184922] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.356 21:45:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.356 "name": "raid_bdev1", 00:14:23.356 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:23.356 "strip_size_kb": 0, 00:14:23.356 "state": "online", 00:14:23.356 "raid_level": "raid1", 00:14:23.356 
"superblock": true, 00:14:23.356 "num_base_bdevs": 4, 00:14:23.356 "num_base_bdevs_discovered": 3, 00:14:23.356 "num_base_bdevs_operational": 3, 00:14:23.356 "base_bdevs_list": [ 00:14:23.356 { 00:14:23.356 "name": null, 00:14:23.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.356 "is_configured": false, 00:14:23.356 "data_offset": 0, 00:14:23.356 "data_size": 63488 00:14:23.356 }, 00:14:23.356 { 00:14:23.356 "name": "BaseBdev2", 00:14:23.356 "uuid": "e95493f1-cd05-5670-b0dc-6e35904fa225", 00:14:23.356 "is_configured": true, 00:14:23.356 "data_offset": 2048, 00:14:23.356 "data_size": 63488 00:14:23.356 }, 00:14:23.356 { 00:14:23.356 "name": "BaseBdev3", 00:14:23.356 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:23.356 "is_configured": true, 00:14:23.356 "data_offset": 2048, 00:14:23.356 "data_size": 63488 00:14:23.356 }, 00:14:23.356 { 00:14:23.356 "name": "BaseBdev4", 00:14:23.356 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:23.356 "is_configured": true, 00:14:23.356 "data_offset": 2048, 00:14:23.356 "data_size": 63488 00:14:23.356 } 00:14:23.356 ] 00:14:23.356 }' 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.356 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.356 [2024-09-29 21:45:42.279408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:23.356 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:23.356 Zero copy mechanism will not be used. 00:14:23.356 Running I/O for 60 seconds... 
00:14:23.926 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:23.926 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.926 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.926 [2024-09-29 21:45:42.617455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:23.926 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.926 21:45:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:23.926 [2024-09-29 21:45:42.668855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:23.926 [2024-09-29 21:45:42.670706] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:23.926 [2024-09-29 21:45:42.773047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:23.926 [2024-09-29 21:45:42.773597] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:24.186 [2024-09-29 21:45:42.981076] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:24.186 [2024-09-29 21:45:42.981275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:24.446 155.00 IOPS, 465.00 MiB/s [2024-09-29 21:45:43.328892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:24.707 [2024-09-29 21:45:43.539649] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:24.707 [2024-09-29 21:45:43.540019] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:24.707 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.707 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.707 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.707 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.707 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.707 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.707 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.707 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.707 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.707 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.967 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.967 "name": "raid_bdev1", 00:14:24.967 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:24.967 "strip_size_kb": 0, 00:14:24.967 "state": "online", 00:14:24.967 "raid_level": "raid1", 00:14:24.967 "superblock": true, 00:14:24.967 "num_base_bdevs": 4, 00:14:24.967 "num_base_bdevs_discovered": 4, 00:14:24.967 "num_base_bdevs_operational": 4, 00:14:24.967 "process": { 00:14:24.967 "type": "rebuild", 00:14:24.967 "target": "spare", 00:14:24.967 "progress": { 00:14:24.967 "blocks": 10240, 00:14:24.967 "percent": 16 00:14:24.967 } 00:14:24.967 }, 00:14:24.967 "base_bdevs_list": [ 00:14:24.967 { 00:14:24.967 "name": "spare", 
00:14:24.967 "uuid": "306ddd53-cd73-57b0-9c2a-6398f62f5f94", 00:14:24.967 "is_configured": true, 00:14:24.967 "data_offset": 2048, 00:14:24.967 "data_size": 63488 00:14:24.967 }, 00:14:24.967 { 00:14:24.967 "name": "BaseBdev2", 00:14:24.967 "uuid": "e95493f1-cd05-5670-b0dc-6e35904fa225", 00:14:24.967 "is_configured": true, 00:14:24.967 "data_offset": 2048, 00:14:24.967 "data_size": 63488 00:14:24.967 }, 00:14:24.967 { 00:14:24.967 "name": "BaseBdev3", 00:14:24.967 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:24.967 "is_configured": true, 00:14:24.967 "data_offset": 2048, 00:14:24.967 "data_size": 63488 00:14:24.967 }, 00:14:24.967 { 00:14:24.967 "name": "BaseBdev4", 00:14:24.967 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:24.967 "is_configured": true, 00:14:24.967 "data_offset": 2048, 00:14:24.967 "data_size": 63488 00:14:24.967 } 00:14:24.967 ] 00:14:24.967 }' 00:14:24.967 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.967 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.967 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.967 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.967 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:24.967 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.967 21:45:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.967 [2024-09-29 21:45:43.804239] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.967 [2024-09-29 21:45:43.893526] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:25.228 [2024-09-29 
21:45:44.007109] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:25.228 [2024-09-29 21:45:44.017709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.228 [2024-09-29 21:45:44.017755] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:25.228 [2024-09-29 21:45:44.017770] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:25.228 [2024-09-29 21:45:44.057069] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.228 "name": "raid_bdev1", 00:14:25.228 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:25.228 "strip_size_kb": 0, 00:14:25.228 "state": "online", 00:14:25.228 "raid_level": "raid1", 00:14:25.228 "superblock": true, 00:14:25.228 "num_base_bdevs": 4, 00:14:25.228 "num_base_bdevs_discovered": 3, 00:14:25.228 "num_base_bdevs_operational": 3, 00:14:25.228 "base_bdevs_list": [ 00:14:25.228 { 00:14:25.228 "name": null, 00:14:25.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.228 "is_configured": false, 00:14:25.228 "data_offset": 0, 00:14:25.228 "data_size": 63488 00:14:25.228 }, 00:14:25.228 { 00:14:25.228 "name": "BaseBdev2", 00:14:25.228 "uuid": "e95493f1-cd05-5670-b0dc-6e35904fa225", 00:14:25.228 "is_configured": true, 00:14:25.228 "data_offset": 2048, 00:14:25.228 "data_size": 63488 00:14:25.228 }, 00:14:25.228 { 00:14:25.228 "name": "BaseBdev3", 00:14:25.228 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:25.228 "is_configured": true, 00:14:25.228 "data_offset": 2048, 00:14:25.228 "data_size": 63488 00:14:25.228 }, 00:14:25.228 { 00:14:25.228 "name": "BaseBdev4", 00:14:25.228 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:25.228 "is_configured": true, 00:14:25.228 "data_offset": 2048, 00:14:25.228 "data_size": 63488 00:14:25.228 } 00:14:25.228 ] 00:14:25.228 }' 00:14:25.228 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.228 21:45:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.769 146.50 IOPS, 439.50 MiB/s 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:25.769 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.769 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:25.769 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:25.769 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.769 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.769 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.769 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.769 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.769 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.769 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.769 "name": "raid_bdev1", 00:14:25.769 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:25.769 "strip_size_kb": 0, 00:14:25.769 "state": "online", 00:14:25.769 "raid_level": "raid1", 00:14:25.769 "superblock": true, 00:14:25.769 "num_base_bdevs": 4, 00:14:25.769 "num_base_bdevs_discovered": 3, 00:14:25.769 "num_base_bdevs_operational": 3, 00:14:25.769 "base_bdevs_list": [ 00:14:25.769 { 00:14:25.769 "name": null, 00:14:25.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.769 "is_configured": false, 00:14:25.769 "data_offset": 0, 00:14:25.770 "data_size": 63488 00:14:25.770 }, 00:14:25.770 { 00:14:25.770 "name": "BaseBdev2", 
00:14:25.770 "uuid": "e95493f1-cd05-5670-b0dc-6e35904fa225", 00:14:25.770 "is_configured": true, 00:14:25.770 "data_offset": 2048, 00:14:25.770 "data_size": 63488 00:14:25.770 }, 00:14:25.770 { 00:14:25.770 "name": "BaseBdev3", 00:14:25.770 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:25.770 "is_configured": true, 00:14:25.770 "data_offset": 2048, 00:14:25.770 "data_size": 63488 00:14:25.770 }, 00:14:25.770 { 00:14:25.770 "name": "BaseBdev4", 00:14:25.770 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:25.770 "is_configured": true, 00:14:25.770 "data_offset": 2048, 00:14:25.770 "data_size": 63488 00:14:25.770 } 00:14:25.770 ] 00:14:25.770 }' 00:14:25.770 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.770 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:25.770 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.770 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:25.770 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:25.770 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.770 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:25.770 [2024-09-29 21:45:44.645256] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:25.770 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.770 21:45:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:25.770 [2024-09-29 21:45:44.719926] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:25.770 [2024-09-29 21:45:44.721713] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.075 [2024-09-29 21:45:44.829571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:26.075 [2024-09-29 21:45:44.829928] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:26.075 [2024-09-29 21:45:44.963808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:26.352 [2024-09-29 21:45:45.197205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:26.352 [2024-09-29 21:45:45.197598] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:26.352 167.00 IOPS, 501.00 MiB/s [2024-09-29 21:45:45.330458] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:26.629 [2024-09-29 21:45:45.331206] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.890 21:45:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.890 "name": "raid_bdev1", 00:14:26.890 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:26.890 "strip_size_kb": 0, 00:14:26.890 "state": "online", 00:14:26.890 "raid_level": "raid1", 00:14:26.890 "superblock": true, 00:14:26.890 "num_base_bdevs": 4, 00:14:26.890 "num_base_bdevs_discovered": 4, 00:14:26.890 "num_base_bdevs_operational": 4, 00:14:26.890 "process": { 00:14:26.890 "type": "rebuild", 00:14:26.890 "target": "spare", 00:14:26.890 "progress": { 00:14:26.890 "blocks": 14336, 00:14:26.890 "percent": 22 00:14:26.890 } 00:14:26.890 }, 00:14:26.890 "base_bdevs_list": [ 00:14:26.890 { 00:14:26.890 "name": "spare", 00:14:26.890 "uuid": "306ddd53-cd73-57b0-9c2a-6398f62f5f94", 00:14:26.890 "is_configured": true, 00:14:26.890 "data_offset": 2048, 00:14:26.890 "data_size": 63488 00:14:26.890 }, 00:14:26.890 { 00:14:26.890 "name": "BaseBdev2", 00:14:26.890 "uuid": "e95493f1-cd05-5670-b0dc-6e35904fa225", 00:14:26.890 "is_configured": true, 00:14:26.890 "data_offset": 2048, 00:14:26.890 "data_size": 63488 00:14:26.890 }, 00:14:26.890 { 00:14:26.890 "name": "BaseBdev3", 00:14:26.890 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:26.890 "is_configured": true, 00:14:26.890 "data_offset": 2048, 00:14:26.890 "data_size": 63488 00:14:26.890 }, 00:14:26.890 { 00:14:26.890 "name": "BaseBdev4", 00:14:26.890 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:26.890 "is_configured": true, 00:14:26.890 "data_offset": 2048, 00:14:26.890 
"data_size": 63488 00:14:26.890 } 00:14:26.890 ] 00:14:26.890 }' 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.890 [2024-09-29 21:45:45.784116] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:26.890 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.890 21:45:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.890 [2024-09-29 21:45:45.869509] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:27.459 [2024-09-29 21:45:46.199256] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:27.459 [2024-09-29 21:45:46.199358] 
bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.459 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.459 "name": "raid_bdev1", 00:14:27.459 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:27.459 "strip_size_kb": 0, 00:14:27.459 "state": "online", 00:14:27.459 "raid_level": "raid1", 00:14:27.459 "superblock": true, 00:14:27.459 "num_base_bdevs": 4, 00:14:27.459 "num_base_bdevs_discovered": 3, 00:14:27.459 
"num_base_bdevs_operational": 3, 00:14:27.459 "process": { 00:14:27.459 "type": "rebuild", 00:14:27.459 "target": "spare", 00:14:27.459 "progress": { 00:14:27.459 "blocks": 18432, 00:14:27.459 "percent": 29 00:14:27.459 } 00:14:27.459 }, 00:14:27.459 "base_bdevs_list": [ 00:14:27.459 { 00:14:27.459 "name": "spare", 00:14:27.459 "uuid": "306ddd53-cd73-57b0-9c2a-6398f62f5f94", 00:14:27.459 "is_configured": true, 00:14:27.459 "data_offset": 2048, 00:14:27.459 "data_size": 63488 00:14:27.459 }, 00:14:27.459 { 00:14:27.459 "name": null, 00:14:27.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.459 "is_configured": false, 00:14:27.459 "data_offset": 0, 00:14:27.459 "data_size": 63488 00:14:27.459 }, 00:14:27.459 { 00:14:27.459 "name": "BaseBdev3", 00:14:27.460 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:27.460 "is_configured": true, 00:14:27.460 "data_offset": 2048, 00:14:27.460 "data_size": 63488 00:14:27.460 }, 00:14:27.460 { 00:14:27.460 "name": "BaseBdev4", 00:14:27.460 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:27.460 "is_configured": true, 00:14:27.460 "data_offset": 2048, 00:14:27.460 "data_size": 63488 00:14:27.460 } 00:14:27.460 ] 00:14:27.460 }' 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.460 141.50 IOPS, 424.50 MiB/s 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.460 [2024-09-29 21:45:46.323907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:27.460 [2024-09-29 21:45:46.324341] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=502 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.460 "name": "raid_bdev1", 00:14:27.460 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:27.460 "strip_size_kb": 0, 00:14:27.460 "state": "online", 00:14:27.460 "raid_level": "raid1", 00:14:27.460 "superblock": true, 00:14:27.460 "num_base_bdevs": 4, 00:14:27.460 "num_base_bdevs_discovered": 3, 00:14:27.460 "num_base_bdevs_operational": 3, 00:14:27.460 "process": { 00:14:27.460 "type": "rebuild", 00:14:27.460 "target": "spare", 00:14:27.460 "progress": { 00:14:27.460 "blocks": 20480, 00:14:27.460 
"percent": 32 00:14:27.460 } 00:14:27.460 }, 00:14:27.460 "base_bdevs_list": [ 00:14:27.460 { 00:14:27.460 "name": "spare", 00:14:27.460 "uuid": "306ddd53-cd73-57b0-9c2a-6398f62f5f94", 00:14:27.460 "is_configured": true, 00:14:27.460 "data_offset": 2048, 00:14:27.460 "data_size": 63488 00:14:27.460 }, 00:14:27.460 { 00:14:27.460 "name": null, 00:14:27.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.460 "is_configured": false, 00:14:27.460 "data_offset": 0, 00:14:27.460 "data_size": 63488 00:14:27.460 }, 00:14:27.460 { 00:14:27.460 "name": "BaseBdev3", 00:14:27.460 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:27.460 "is_configured": true, 00:14:27.460 "data_offset": 2048, 00:14:27.460 "data_size": 63488 00:14:27.460 }, 00:14:27.460 { 00:14:27.460 "name": "BaseBdev4", 00:14:27.460 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:27.460 "is_configured": true, 00:14:27.460 "data_offset": 2048, 00:14:27.460 "data_size": 63488 00:14:27.460 } 00:14:27.460 ] 00:14:27.460 }' 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.460 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.460 [2024-09-29 21:45:46.437390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:27.460 [2024-09-29 21:45:46.437860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:27.718 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.718 21:45:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.978 [2024-09-29 21:45:46.921144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:28.548 124.00 IOPS, 372.00 MiB/s [2024-09-29 21:45:47.369538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:28.548 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.548 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.548 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.548 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.549 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.549 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.549 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.549 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.549 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.549 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.549 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.549 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.549 "name": "raid_bdev1", 00:14:28.549 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:28.549 "strip_size_kb": 0, 00:14:28.549 "state": "online", 00:14:28.549 "raid_level": "raid1", 00:14:28.549 "superblock": true, 00:14:28.549 "num_base_bdevs": 4, 00:14:28.549 "num_base_bdevs_discovered": 3, 00:14:28.549 "num_base_bdevs_operational": 3, 00:14:28.549 "process": { 
00:14:28.549 "type": "rebuild", 00:14:28.549 "target": "spare", 00:14:28.549 "progress": { 00:14:28.549 "blocks": 34816, 00:14:28.549 "percent": 54 00:14:28.549 } 00:14:28.549 }, 00:14:28.549 "base_bdevs_list": [ 00:14:28.549 { 00:14:28.549 "name": "spare", 00:14:28.549 "uuid": "306ddd53-cd73-57b0-9c2a-6398f62f5f94", 00:14:28.549 "is_configured": true, 00:14:28.549 "data_offset": 2048, 00:14:28.549 "data_size": 63488 00:14:28.549 }, 00:14:28.549 { 00:14:28.549 "name": null, 00:14:28.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.549 "is_configured": false, 00:14:28.549 "data_offset": 0, 00:14:28.549 "data_size": 63488 00:14:28.549 }, 00:14:28.549 { 00:14:28.549 "name": "BaseBdev3", 00:14:28.549 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:28.549 "is_configured": true, 00:14:28.549 "data_offset": 2048, 00:14:28.549 "data_size": 63488 00:14:28.549 }, 00:14:28.549 { 00:14:28.549 "name": "BaseBdev4", 00:14:28.549 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:28.549 "is_configured": true, 00:14:28.549 "data_offset": 2048, 00:14:28.549 "data_size": 63488 00:14:28.549 } 00:14:28.549 ] 00:14:28.549 }' 00:14:28.549 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.808 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.808 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.808 [2024-09-29 21:45:47.598010] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:28.808 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.808 21:45:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.067 [2024-09-29 21:45:47.800311] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 
offset_begin: 36864 offset_end: 43008 00:14:29.067 [2024-09-29 21:45:47.800771] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:29.586 110.83 IOPS, 332.50 MiB/s [2024-09-29 21:45:48.451864] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:29.846 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.847 "name": "raid_bdev1", 00:14:29.847 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:29.847 "strip_size_kb": 0, 00:14:29.847 "state": "online", 00:14:29.847 "raid_level": "raid1", 00:14:29.847 "superblock": true, 00:14:29.847 
"num_base_bdevs": 4, 00:14:29.847 "num_base_bdevs_discovered": 3, 00:14:29.847 "num_base_bdevs_operational": 3, 00:14:29.847 "process": { 00:14:29.847 "type": "rebuild", 00:14:29.847 "target": "spare", 00:14:29.847 "progress": { 00:14:29.847 "blocks": 53248, 00:14:29.847 "percent": 83 00:14:29.847 } 00:14:29.847 }, 00:14:29.847 "base_bdevs_list": [ 00:14:29.847 { 00:14:29.847 "name": "spare", 00:14:29.847 "uuid": "306ddd53-cd73-57b0-9c2a-6398f62f5f94", 00:14:29.847 "is_configured": true, 00:14:29.847 "data_offset": 2048, 00:14:29.847 "data_size": 63488 00:14:29.847 }, 00:14:29.847 { 00:14:29.847 "name": null, 00:14:29.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.847 "is_configured": false, 00:14:29.847 "data_offset": 0, 00:14:29.847 "data_size": 63488 00:14:29.847 }, 00:14:29.847 { 00:14:29.847 "name": "BaseBdev3", 00:14:29.847 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:29.847 "is_configured": true, 00:14:29.847 "data_offset": 2048, 00:14:29.847 "data_size": 63488 00:14:29.847 }, 00:14:29.847 { 00:14:29.847 "name": "BaseBdev4", 00:14:29.847 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:29.847 "is_configured": true, 00:14:29.847 "data_offset": 2048, 00:14:29.847 "data_size": 63488 00:14:29.847 } 00:14:29.847 ] 00:14:29.847 }' 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.847 21:45:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.416 [2024-09-29 21:45:49.099744] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:30.416 [2024-09-29 
21:45:49.199542] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:30.416 [2024-09-29 21:45:49.201142] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.985 99.14 IOPS, 297.43 MiB/s 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.985 "name": "raid_bdev1", 00:14:30.985 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:30.985 "strip_size_kb": 0, 00:14:30.985 "state": "online", 00:14:30.985 "raid_level": "raid1", 00:14:30.985 "superblock": true, 00:14:30.985 "num_base_bdevs": 4, 00:14:30.985 "num_base_bdevs_discovered": 3, 00:14:30.985 "num_base_bdevs_operational": 3, 00:14:30.985 "base_bdevs_list": [ 
00:14:30.985 { 00:14:30.985 "name": "spare", 00:14:30.985 "uuid": "306ddd53-cd73-57b0-9c2a-6398f62f5f94", 00:14:30.985 "is_configured": true, 00:14:30.985 "data_offset": 2048, 00:14:30.985 "data_size": 63488 00:14:30.985 }, 00:14:30.985 { 00:14:30.985 "name": null, 00:14:30.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.985 "is_configured": false, 00:14:30.985 "data_offset": 0, 00:14:30.985 "data_size": 63488 00:14:30.985 }, 00:14:30.985 { 00:14:30.985 "name": "BaseBdev3", 00:14:30.985 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:30.985 "is_configured": true, 00:14:30.985 "data_offset": 2048, 00:14:30.985 "data_size": 63488 00:14:30.985 }, 00:14:30.985 { 00:14:30.985 "name": "BaseBdev4", 00:14:30.985 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:30.985 "is_configured": true, 00:14:30.985 "data_offset": 2048, 00:14:30.985 "data_size": 63488 00:14:30.985 } 00:14:30.985 ] 00:14:30.985 }' 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.985 21:45:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.985 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.245 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.245 "name": "raid_bdev1", 00:14:31.245 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:31.245 "strip_size_kb": 0, 00:14:31.245 "state": "online", 00:14:31.245 "raid_level": "raid1", 00:14:31.245 "superblock": true, 00:14:31.245 "num_base_bdevs": 4, 00:14:31.245 "num_base_bdevs_discovered": 3, 00:14:31.245 "num_base_bdevs_operational": 3, 00:14:31.245 "base_bdevs_list": [ 00:14:31.245 { 00:14:31.245 "name": "spare", 00:14:31.245 "uuid": "306ddd53-cd73-57b0-9c2a-6398f62f5f94", 00:14:31.245 "is_configured": true, 00:14:31.245 "data_offset": 2048, 00:14:31.245 "data_size": 63488 00:14:31.245 }, 00:14:31.245 { 00:14:31.245 "name": null, 00:14:31.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.245 "is_configured": false, 00:14:31.245 "data_offset": 0, 00:14:31.245 "data_size": 63488 00:14:31.245 }, 00:14:31.245 { 00:14:31.245 "name": "BaseBdev3", 00:14:31.245 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:31.245 "is_configured": true, 00:14:31.245 "data_offset": 2048, 00:14:31.245 "data_size": 63488 00:14:31.245 }, 00:14:31.245 { 00:14:31.245 "name": "BaseBdev4", 00:14:31.245 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:31.245 "is_configured": true, 00:14:31.245 "data_offset": 2048, 
00:14:31.245 "data_size": 63488 00:14:31.245 } 00:14:31.245 ] 00:14:31.245 }' 00:14:31.245 21:45:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.245 "name": "raid_bdev1", 00:14:31.245 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:31.245 "strip_size_kb": 0, 00:14:31.245 "state": "online", 00:14:31.245 "raid_level": "raid1", 00:14:31.245 "superblock": true, 00:14:31.245 "num_base_bdevs": 4, 00:14:31.245 "num_base_bdevs_discovered": 3, 00:14:31.245 "num_base_bdevs_operational": 3, 00:14:31.245 "base_bdevs_list": [ 00:14:31.245 { 00:14:31.245 "name": "spare", 00:14:31.245 "uuid": "306ddd53-cd73-57b0-9c2a-6398f62f5f94", 00:14:31.245 "is_configured": true, 00:14:31.245 "data_offset": 2048, 00:14:31.245 "data_size": 63488 00:14:31.245 }, 00:14:31.245 { 00:14:31.245 "name": null, 00:14:31.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.245 "is_configured": false, 00:14:31.245 "data_offset": 0, 00:14:31.245 "data_size": 63488 00:14:31.245 }, 00:14:31.245 { 00:14:31.245 "name": "BaseBdev3", 00:14:31.245 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:31.245 "is_configured": true, 00:14:31.245 "data_offset": 2048, 00:14:31.245 "data_size": 63488 00:14:31.245 }, 00:14:31.245 { 00:14:31.245 "name": "BaseBdev4", 00:14:31.245 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:31.245 "is_configured": true, 00:14:31.245 "data_offset": 2048, 00:14:31.245 "data_size": 63488 00:14:31.245 } 00:14:31.245 ] 00:14:31.245 }' 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.245 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.765 90.88 IOPS, 272.62 MiB/s 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.765 [2024-09-29 21:45:50.545153] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:31.765 [2024-09-29 21:45:50.545263] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.765 00:14:31.765 Latency(us) 00:14:31.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:31.765 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:31.765 raid_bdev1 : 8.35 87.93 263.79 0.00 0.00 15860.70 311.22 117220.72 00:14:31.765 =================================================================================================================== 00:14:31.765 Total : 87.93 263.79 0.00 0.00 15860.70 311.22 117220.72 00:14:31.765 [2024-09-29 21:45:50.632647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.765 [2024-09-29 21:45:50.632725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.765 [2024-09-29 21:45:50.632827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.765 [2024-09-29 21:45:50.632877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:31.765 { 00:14:31.765 "results": [ 00:14:31.765 { 00:14:31.765 "job": "raid_bdev1", 00:14:31.765 "core_mask": "0x1", 00:14:31.765 "workload": "randrw", 00:14:31.765 "percentage": 50, 00:14:31.765 "status": "finished", 00:14:31.765 "queue_depth": 2, 00:14:31.765 "io_size": 3145728, 00:14:31.765 "runtime": 8.347645, 00:14:31.765 "iops": 87.92899075128375, 00:14:31.765 "mibps": 263.7869722538512, 00:14:31.765 "io_failed": 0, 00:14:31.765 "io_timeout": 0, 
00:14:31.765 "avg_latency_us": 15860.695463036776, 00:14:31.765 "min_latency_us": 311.22445414847164, 00:14:31.765 "max_latency_us": 117220.7231441048 00:14:31.765 } 00:14:31.765 ], 00:14:31.765 "core_count": 1 00:14:31.765 } 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:31.765 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:32.025 /dev/nbd0 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.025 1+0 records in 00:14:32.025 1+0 records out 00:14:32.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421494 s, 9.7 MB/s 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 
00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:32.025 21:45:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:32.025 21:45:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:32.285 /dev/nbd1 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.285 1+0 records in 00:14:32.285 1+0 records out 00:14:32.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052268 s, 7.8 MB/s 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:32.285 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:32.545 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:32.545 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.545 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:32.545 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.545 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:32.545 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.545 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.806 
21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:32.806 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:33.067 /dev/nbd1 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 
-- # local nbd_name=nbd1 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.067 1+0 records in 00:14:33.067 1+0 records out 00:14:33.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436685 s, 9.4 MB/s 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- 
# cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.067 21:45:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:33.327 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:33.327 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:33.327 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:33.327 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.327 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.328 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:33.328 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:33.328 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.328 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:33.328 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.328 21:45:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:33.328 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:33.328 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:33.328 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.328 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.588 [2024-09-29 21:45:52.382792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:33.588 [2024-09-29 21:45:52.382925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.588 [2024-09-29 21:45:52.382961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:33.588 [2024-09-29 21:45:52.382994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.588 [2024-09-29 21:45:52.384906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.588 [2024-09-29 21:45:52.384983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:33.588 [2024-09-29 21:45:52.385098] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:33.588 [2024-09-29 21:45:52.385183] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.588 [2024-09-29 21:45:52.385350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.588 [2024-09-29 21:45:52.385494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:33.588 spare 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.588 [2024-09-29 
21:45:52.485415] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:33.588 [2024-09-29 21:45:52.485487] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:33.588 [2024-09-29 21:45:52.485735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:33.588 [2024-09-29 21:45:52.485929] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:33.588 [2024-09-29 21:45:52.485970] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:33.588 [2024-09-29 21:45:52.486172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.588 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.589 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.589 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.589 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.589 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.589 "name": "raid_bdev1", 00:14:33.589 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:33.589 "strip_size_kb": 0, 00:14:33.589 "state": "online", 00:14:33.589 "raid_level": "raid1", 00:14:33.589 "superblock": true, 00:14:33.589 "num_base_bdevs": 4, 00:14:33.589 "num_base_bdevs_discovered": 3, 00:14:33.589 "num_base_bdevs_operational": 3, 00:14:33.589 "base_bdevs_list": [ 00:14:33.589 { 00:14:33.589 "name": "spare", 00:14:33.589 "uuid": "306ddd53-cd73-57b0-9c2a-6398f62f5f94", 00:14:33.589 "is_configured": true, 00:14:33.589 "data_offset": 2048, 00:14:33.589 "data_size": 63488 00:14:33.589 }, 00:14:33.589 { 00:14:33.589 "name": null, 00:14:33.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.589 "is_configured": false, 00:14:33.589 "data_offset": 2048, 00:14:33.589 "data_size": 63488 00:14:33.589 }, 00:14:33.589 { 00:14:33.589 "name": "BaseBdev3", 00:14:33.589 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:33.589 "is_configured": true, 00:14:33.589 "data_offset": 2048, 00:14:33.589 "data_size": 63488 00:14:33.589 }, 00:14:33.589 { 00:14:33.589 "name": "BaseBdev4", 00:14:33.589 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:33.589 "is_configured": true, 00:14:33.589 "data_offset": 2048, 00:14:33.589 "data_size": 63488 00:14:33.589 } 00:14:33.589 ] 00:14:33.589 }' 
00:14:33.589 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.589 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.157 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.157 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.157 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.157 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.157 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.157 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.157 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.157 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.157 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.157 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.157 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.157 "name": "raid_bdev1", 00:14:34.157 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:34.157 "strip_size_kb": 0, 00:14:34.157 "state": "online", 00:14:34.157 "raid_level": "raid1", 00:14:34.157 "superblock": true, 00:14:34.157 "num_base_bdevs": 4, 00:14:34.157 "num_base_bdevs_discovered": 3, 00:14:34.157 "num_base_bdevs_operational": 3, 00:14:34.157 "base_bdevs_list": [ 00:14:34.157 { 00:14:34.157 "name": "spare", 00:14:34.157 "uuid": "306ddd53-cd73-57b0-9c2a-6398f62f5f94", 00:14:34.157 "is_configured": true, 00:14:34.157 "data_offset": 
2048, 00:14:34.157 "data_size": 63488 00:14:34.157 }, 00:14:34.157 { 00:14:34.157 "name": null, 00:14:34.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.157 "is_configured": false, 00:14:34.157 "data_offset": 2048, 00:14:34.157 "data_size": 63488 00:14:34.157 }, 00:14:34.157 { 00:14:34.157 "name": "BaseBdev3", 00:14:34.157 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:34.157 "is_configured": true, 00:14:34.157 "data_offset": 2048, 00:14:34.157 "data_size": 63488 00:14:34.157 }, 00:14:34.157 { 00:14:34.157 "name": "BaseBdev4", 00:14:34.157 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:34.157 "is_configured": true, 00:14:34.157 "data_offset": 2048, 00:14:34.157 "data_size": 63488 00:14:34.157 } 00:14:34.157 ] 00:14:34.157 }' 00:14:34.157 21:45:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.157 [2024-09-29 21:45:53.117623] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.157 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:34.417 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.417 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.417 "name": "raid_bdev1", 00:14:34.417 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:34.417 "strip_size_kb": 0, 00:14:34.417 "state": "online", 00:14:34.417 "raid_level": "raid1", 00:14:34.417 "superblock": true, 00:14:34.417 "num_base_bdevs": 4, 00:14:34.417 "num_base_bdevs_discovered": 2, 00:14:34.417 "num_base_bdevs_operational": 2, 00:14:34.417 "base_bdevs_list": [ 00:14:34.417 { 00:14:34.417 "name": null, 00:14:34.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.417 "is_configured": false, 00:14:34.417 "data_offset": 0, 00:14:34.417 "data_size": 63488 00:14:34.417 }, 00:14:34.417 { 00:14:34.417 "name": null, 00:14:34.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.417 "is_configured": false, 00:14:34.417 "data_offset": 2048, 00:14:34.417 "data_size": 63488 00:14:34.417 }, 00:14:34.417 { 00:14:34.417 "name": "BaseBdev3", 00:14:34.417 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:34.417 "is_configured": true, 00:14:34.417 "data_offset": 2048, 00:14:34.417 "data_size": 63488 00:14:34.417 }, 00:14:34.417 { 00:14:34.417 "name": "BaseBdev4", 00:14:34.417 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:34.417 "is_configured": true, 00:14:34.417 "data_offset": 2048, 00:14:34.417 "data_size": 63488 00:14:34.417 } 00:14:34.417 ] 00:14:34.417 }' 00:14:34.417 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.417 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.677 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:34.677 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:34.677 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.677 [2024-09-29 21:45:53.564941] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.677 [2024-09-29 21:45:53.565135] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:34.677 [2024-09-29 21:45:53.565153] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:34.677 [2024-09-29 21:45:53.565181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.677 [2024-09-29 21:45:53.577474] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:34.677 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.677 21:45:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:34.677 [2024-09-29 21:45:53.579241] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:35.618 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.618 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.618 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.618 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.618 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.618 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.618 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.618 21:45:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.618 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.879 "name": "raid_bdev1", 00:14:35.879 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:35.879 "strip_size_kb": 0, 00:14:35.879 "state": "online", 00:14:35.879 "raid_level": "raid1", 00:14:35.879 "superblock": true, 00:14:35.879 "num_base_bdevs": 4, 00:14:35.879 "num_base_bdevs_discovered": 3, 00:14:35.879 "num_base_bdevs_operational": 3, 00:14:35.879 "process": { 00:14:35.879 "type": "rebuild", 00:14:35.879 "target": "spare", 00:14:35.879 "progress": { 00:14:35.879 "blocks": 20480, 00:14:35.879 "percent": 32 00:14:35.879 } 00:14:35.879 }, 00:14:35.879 "base_bdevs_list": [ 00:14:35.879 { 00:14:35.879 "name": "spare", 00:14:35.879 "uuid": "306ddd53-cd73-57b0-9c2a-6398f62f5f94", 00:14:35.879 "is_configured": true, 00:14:35.879 "data_offset": 2048, 00:14:35.879 "data_size": 63488 00:14:35.879 }, 00:14:35.879 { 00:14:35.879 "name": null, 00:14:35.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.879 "is_configured": false, 00:14:35.879 "data_offset": 2048, 00:14:35.879 "data_size": 63488 00:14:35.879 }, 00:14:35.879 { 00:14:35.879 "name": "BaseBdev3", 00:14:35.879 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:35.879 "is_configured": true, 00:14:35.879 "data_offset": 2048, 00:14:35.879 "data_size": 63488 00:14:35.879 }, 00:14:35.879 { 00:14:35.879 "name": "BaseBdev4", 00:14:35.879 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:35.879 "is_configured": true, 00:14:35.879 "data_offset": 2048, 00:14:35.879 "data_size": 63488 00:14:35.879 } 00:14:35.879 ] 00:14:35.879 }' 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.879 [2024-09-29 21:45:54.740066] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.879 [2024-09-29 21:45:54.783869] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:35.879 [2024-09-29 21:45:54.783927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.879 [2024-09-29 21:45:54.783942] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:35.879 [2024-09-29 21:45:54.783951] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.879 21:45:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.879 "name": "raid_bdev1", 00:14:35.879 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:35.879 "strip_size_kb": 0, 00:14:35.879 "state": "online", 00:14:35.879 "raid_level": "raid1", 00:14:35.879 "superblock": true, 00:14:35.879 "num_base_bdevs": 4, 00:14:35.879 "num_base_bdevs_discovered": 2, 00:14:35.879 "num_base_bdevs_operational": 2, 00:14:35.879 "base_bdevs_list": [ 00:14:35.879 { 00:14:35.879 "name": null, 00:14:35.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.879 "is_configured": false, 00:14:35.879 "data_offset": 0, 00:14:35.879 "data_size": 63488 00:14:35.879 }, 00:14:35.879 { 00:14:35.879 "name": null, 00:14:35.879 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:35.879 "is_configured": false, 00:14:35.879 "data_offset": 2048, 00:14:35.879 "data_size": 63488 00:14:35.879 }, 00:14:35.879 { 00:14:35.879 "name": "BaseBdev3", 00:14:35.879 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:35.879 "is_configured": true, 00:14:35.879 "data_offset": 2048, 00:14:35.879 "data_size": 63488 00:14:35.879 }, 00:14:35.879 { 00:14:35.879 "name": "BaseBdev4", 00:14:35.879 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:35.879 "is_configured": true, 00:14:35.879 "data_offset": 2048, 00:14:35.879 "data_size": 63488 00:14:35.879 } 00:14:35.879 ] 00:14:35.879 }' 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.879 21:45:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.449 21:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:36.449 21:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.449 21:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.449 [2024-09-29 21:45:55.274147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:36.449 [2024-09-29 21:45:55.274254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.449 [2024-09-29 21:45:55.274294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:36.449 [2024-09-29 21:45:55.274324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.449 [2024-09-29 21:45:55.274769] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.449 [2024-09-29 21:45:55.274830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:36.449 [2024-09-29 21:45:55.274932] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:36.449 [2024-09-29 21:45:55.274972] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:36.449 [2024-09-29 21:45:55.275009] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:36.449 [2024-09-29 21:45:55.275094] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.449 [2024-09-29 21:45:55.288252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:36.449 spare 00:14:36.449 21:45:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.449 [2024-09-29 21:45:55.290022] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.449 21:45:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:37.389 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.389 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.389 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.389 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.389 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.389 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.389 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.389 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.389 21:45:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.389 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.389 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.389 "name": "raid_bdev1", 00:14:37.389 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:37.389 "strip_size_kb": 0, 00:14:37.389 "state": "online", 00:14:37.389 "raid_level": "raid1", 00:14:37.389 "superblock": true, 00:14:37.389 "num_base_bdevs": 4, 00:14:37.389 "num_base_bdevs_discovered": 3, 00:14:37.389 "num_base_bdevs_operational": 3, 00:14:37.389 "process": { 00:14:37.389 "type": "rebuild", 00:14:37.389 "target": "spare", 00:14:37.389 "progress": { 00:14:37.389 "blocks": 20480, 00:14:37.389 "percent": 32 00:14:37.389 } 00:14:37.389 }, 00:14:37.389 "base_bdevs_list": [ 00:14:37.389 { 00:14:37.389 "name": "spare", 00:14:37.389 "uuid": "306ddd53-cd73-57b0-9c2a-6398f62f5f94", 00:14:37.389 "is_configured": true, 00:14:37.389 "data_offset": 2048, 00:14:37.389 "data_size": 63488 00:14:37.389 }, 00:14:37.389 { 00:14:37.389 "name": null, 00:14:37.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.389 "is_configured": false, 00:14:37.389 "data_offset": 2048, 00:14:37.389 "data_size": 63488 00:14:37.389 }, 00:14:37.389 { 00:14:37.389 "name": "BaseBdev3", 00:14:37.389 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:37.389 "is_configured": true, 00:14:37.389 "data_offset": 2048, 00:14:37.389 "data_size": 63488 00:14:37.389 }, 00:14:37.389 { 00:14:37.390 "name": "BaseBdev4", 00:14:37.390 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:37.390 "is_configured": true, 00:14:37.390 "data_offset": 2048, 00:14:37.390 "data_size": 63488 00:14:37.390 } 00:14:37.390 ] 00:14:37.390 }' 00:14:37.390 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.650 [2024-09-29 21:45:56.450278] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.650 [2024-09-29 21:45:56.494623] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:37.650 [2024-09-29 21:45:56.494677] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.650 [2024-09-29 21:45:56.494693] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.650 [2024-09-29 21:45:56.494700] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.650 "name": "raid_bdev1", 00:14:37.650 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:37.650 "strip_size_kb": 0, 00:14:37.650 "state": "online", 00:14:37.650 "raid_level": "raid1", 00:14:37.650 "superblock": true, 00:14:37.650 "num_base_bdevs": 4, 00:14:37.650 "num_base_bdevs_discovered": 2, 00:14:37.650 "num_base_bdevs_operational": 2, 00:14:37.650 "base_bdevs_list": [ 00:14:37.650 { 00:14:37.650 "name": null, 00:14:37.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.650 "is_configured": false, 00:14:37.650 "data_offset": 0, 00:14:37.650 "data_size": 63488 00:14:37.650 }, 00:14:37.650 { 00:14:37.650 "name": null, 00:14:37.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.650 "is_configured": false, 00:14:37.650 "data_offset": 2048, 00:14:37.650 "data_size": 63488 00:14:37.650 }, 
00:14:37.650 { 00:14:37.650 "name": "BaseBdev3", 00:14:37.650 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:37.650 "is_configured": true, 00:14:37.650 "data_offset": 2048, 00:14:37.650 "data_size": 63488 00:14:37.650 }, 00:14:37.650 { 00:14:37.650 "name": "BaseBdev4", 00:14:37.650 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:37.650 "is_configured": true, 00:14:37.650 "data_offset": 2048, 00:14:37.650 "data_size": 63488 00:14:37.650 } 00:14:37.650 ] 00:14:37.650 }' 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.650 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.220 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.220 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.220 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.220 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.220 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.220 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.220 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.220 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.220 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.220 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.220 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.220 "name": "raid_bdev1", 00:14:38.220 "uuid": 
"27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:38.220 "strip_size_kb": 0, 00:14:38.220 "state": "online", 00:14:38.220 "raid_level": "raid1", 00:14:38.220 "superblock": true, 00:14:38.220 "num_base_bdevs": 4, 00:14:38.220 "num_base_bdevs_discovered": 2, 00:14:38.220 "num_base_bdevs_operational": 2, 00:14:38.220 "base_bdevs_list": [ 00:14:38.220 { 00:14:38.220 "name": null, 00:14:38.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.220 "is_configured": false, 00:14:38.220 "data_offset": 0, 00:14:38.220 "data_size": 63488 00:14:38.220 }, 00:14:38.220 { 00:14:38.220 "name": null, 00:14:38.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.220 "is_configured": false, 00:14:38.220 "data_offset": 2048, 00:14:38.220 "data_size": 63488 00:14:38.220 }, 00:14:38.220 { 00:14:38.220 "name": "BaseBdev3", 00:14:38.220 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:38.220 "is_configured": true, 00:14:38.220 "data_offset": 2048, 00:14:38.220 "data_size": 63488 00:14:38.220 }, 00:14:38.220 { 00:14:38.220 "name": "BaseBdev4", 00:14:38.220 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:38.220 "is_configured": true, 00:14:38.220 "data_offset": 2048, 00:14:38.220 "data_size": 63488 00:14:38.220 } 00:14:38.221 ] 00:14:38.221 }' 00:14:38.221 21:45:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.221 21:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.221 21:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.221 21:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.221 21:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:38.221 21:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.221 21:45:57 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.221 21:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.221 21:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:38.221 21:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.221 21:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.221 [2024-09-29 21:45:57.116325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:38.221 [2024-09-29 21:45:57.116419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.221 [2024-09-29 21:45:57.116444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:38.221 [2024-09-29 21:45:57.116453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.221 [2024-09-29 21:45:57.116866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.221 [2024-09-29 21:45:57.116883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:38.221 [2024-09-29 21:45:57.116956] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:38.221 [2024-09-29 21:45:57.116969] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:38.221 [2024-09-29 21:45:57.116980] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:38.221 [2024-09-29 21:45:57.116990] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:38.221 BaseBdev1 00:14:38.221 21:45:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:38.221 21:45:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.161 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.421 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.421 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.421 "name": "raid_bdev1", 00:14:39.421 "uuid": 
"27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:39.421 "strip_size_kb": 0, 00:14:39.421 "state": "online", 00:14:39.421 "raid_level": "raid1", 00:14:39.421 "superblock": true, 00:14:39.421 "num_base_bdevs": 4, 00:14:39.421 "num_base_bdevs_discovered": 2, 00:14:39.421 "num_base_bdevs_operational": 2, 00:14:39.421 "base_bdevs_list": [ 00:14:39.421 { 00:14:39.421 "name": null, 00:14:39.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.421 "is_configured": false, 00:14:39.421 "data_offset": 0, 00:14:39.421 "data_size": 63488 00:14:39.421 }, 00:14:39.421 { 00:14:39.421 "name": null, 00:14:39.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.421 "is_configured": false, 00:14:39.421 "data_offset": 2048, 00:14:39.421 "data_size": 63488 00:14:39.421 }, 00:14:39.421 { 00:14:39.421 "name": "BaseBdev3", 00:14:39.421 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:39.421 "is_configured": true, 00:14:39.421 "data_offset": 2048, 00:14:39.421 "data_size": 63488 00:14:39.421 }, 00:14:39.421 { 00:14:39.421 "name": "BaseBdev4", 00:14:39.421 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:39.421 "is_configured": true, 00:14:39.421 "data_offset": 2048, 00:14:39.421 "data_size": 63488 00:14:39.421 } 00:14:39.421 ] 00:14:39.421 }' 00:14:39.421 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.421 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.681 "name": "raid_bdev1", 00:14:39.681 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:39.681 "strip_size_kb": 0, 00:14:39.681 "state": "online", 00:14:39.681 "raid_level": "raid1", 00:14:39.681 "superblock": true, 00:14:39.681 "num_base_bdevs": 4, 00:14:39.681 "num_base_bdevs_discovered": 2, 00:14:39.681 "num_base_bdevs_operational": 2, 00:14:39.681 "base_bdevs_list": [ 00:14:39.681 { 00:14:39.681 "name": null, 00:14:39.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.681 "is_configured": false, 00:14:39.681 "data_offset": 0, 00:14:39.681 "data_size": 63488 00:14:39.681 }, 00:14:39.681 { 00:14:39.681 "name": null, 00:14:39.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.681 "is_configured": false, 00:14:39.681 "data_offset": 2048, 00:14:39.681 "data_size": 63488 00:14:39.681 }, 00:14:39.681 { 00:14:39.681 "name": "BaseBdev3", 00:14:39.681 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:39.681 "is_configured": true, 00:14:39.681 "data_offset": 2048, 00:14:39.681 "data_size": 63488 00:14:39.681 }, 00:14:39.681 { 00:14:39.681 "name": "BaseBdev4", 00:14:39.681 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:39.681 "is_configured": true, 00:14:39.681 "data_offset": 2048, 00:14:39.681 "data_size": 63488 00:14:39.681 
} 00:14:39.681 ] 00:14:39.681 }' 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.681 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.941 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.941 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:39.941 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:39.941 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:39.941 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:39.941 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.941 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:39.941 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.941 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:39.941 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.941 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.941 [2024-09-29 21:45:58.705802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.941 [2024-09-29 21:45:58.705961] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:14:39.941 [2024-09-29 21:45:58.705977] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:39.941 request: 00:14:39.941 { 00:14:39.941 "base_bdev": "BaseBdev1", 00:14:39.941 "raid_bdev": "raid_bdev1", 00:14:39.941 "method": "bdev_raid_add_base_bdev", 00:14:39.941 "req_id": 1 00:14:39.941 } 00:14:39.941 Got JSON-RPC error response 00:14:39.941 response: 00:14:39.941 { 00:14:39.942 "code": -22, 00:14:39.942 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:39.942 } 00:14:39.942 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:39.942 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:39.942 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:39.942 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:39.942 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:39.942 21:45:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:40.878 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:40.878 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.878 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.879 "name": "raid_bdev1", 00:14:40.879 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:40.879 "strip_size_kb": 0, 00:14:40.879 "state": "online", 00:14:40.879 "raid_level": "raid1", 00:14:40.879 "superblock": true, 00:14:40.879 "num_base_bdevs": 4, 00:14:40.879 "num_base_bdevs_discovered": 2, 00:14:40.879 "num_base_bdevs_operational": 2, 00:14:40.879 "base_bdevs_list": [ 00:14:40.879 { 00:14:40.879 "name": null, 00:14:40.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.879 "is_configured": false, 00:14:40.879 "data_offset": 0, 00:14:40.879 "data_size": 63488 00:14:40.879 }, 00:14:40.879 { 00:14:40.879 "name": null, 00:14:40.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.879 "is_configured": false, 00:14:40.879 "data_offset": 2048, 00:14:40.879 "data_size": 63488 00:14:40.879 }, 00:14:40.879 { 00:14:40.879 "name": "BaseBdev3", 00:14:40.879 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:40.879 "is_configured": true, 00:14:40.879 
"data_offset": 2048, 00:14:40.879 "data_size": 63488 00:14:40.879 }, 00:14:40.879 { 00:14:40.879 "name": "BaseBdev4", 00:14:40.879 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:40.879 "is_configured": true, 00:14:40.879 "data_offset": 2048, 00:14:40.879 "data_size": 63488 00:14:40.879 } 00:14:40.879 ] 00:14:40.879 }' 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.879 21:45:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.448 "name": "raid_bdev1", 00:14:41.448 "uuid": "27ba6365-e148-4120-bea5-a54b1769f0c8", 00:14:41.448 "strip_size_kb": 0, 00:14:41.448 "state": "online", 00:14:41.448 "raid_level": "raid1", 00:14:41.448 "superblock": true, 
00:14:41.448 "num_base_bdevs": 4, 00:14:41.448 "num_base_bdevs_discovered": 2, 00:14:41.448 "num_base_bdevs_operational": 2, 00:14:41.448 "base_bdevs_list": [ 00:14:41.448 { 00:14:41.448 "name": null, 00:14:41.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.448 "is_configured": false, 00:14:41.448 "data_offset": 0, 00:14:41.448 "data_size": 63488 00:14:41.448 }, 00:14:41.448 { 00:14:41.448 "name": null, 00:14:41.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.448 "is_configured": false, 00:14:41.448 "data_offset": 2048, 00:14:41.448 "data_size": 63488 00:14:41.448 }, 00:14:41.448 { 00:14:41.448 "name": "BaseBdev3", 00:14:41.448 "uuid": "46d9a71e-a263-5955-b484-4194a037755a", 00:14:41.448 "is_configured": true, 00:14:41.448 "data_offset": 2048, 00:14:41.448 "data_size": 63488 00:14:41.448 }, 00:14:41.448 { 00:14:41.448 "name": "BaseBdev4", 00:14:41.448 "uuid": "04461010-9010-58fd-b27b-d66e41325227", 00:14:41.448 "is_configured": true, 00:14:41.448 "data_offset": 2048, 00:14:41.448 "data_size": 63488 00:14:41.448 } 00:14:41.448 ] 00:14:41.448 }' 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79212 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 79212 ']' 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 79212 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:14:41.448 21:46:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79212 00:14:41.448 killing process with pid 79212 00:14:41.448 Received shutdown signal, test time was about 18.140172 seconds 00:14:41.448 00:14:41.448 Latency(us) 00:14:41.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.448 =================================================================================================================== 00:14:41.448 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79212' 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 79212 00:14:41.448 [2024-09-29 21:46:00.386622] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.448 [2024-09-29 21:46:00.386751] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.448 21:46:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 79212 00:14:41.449 [2024-09-29 21:46:00.386816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.449 [2024-09-29 21:46:00.386828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:42.017 [2024-09-29 21:46:00.777705] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.397 21:46:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:43.397 00:14:43.397 real 0m21.637s 
00:14:43.397 user 0m28.213s 00:14:43.397 sys 0m2.678s 00:14:43.397 ************************************ 00:14:43.397 END TEST raid_rebuild_test_sb_io 00:14:43.397 ************************************ 00:14:43.397 21:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:43.397 21:46:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.397 21:46:02 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:43.397 21:46:02 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:43.397 21:46:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:43.397 21:46:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:43.397 21:46:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.397 ************************************ 00:14:43.397 START TEST raid5f_state_function_test 00:14:43.397 ************************************ 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:43.397 21:46:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79937 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79937' 00:14:43.397 Process raid pid: 79937 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79937 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 79937 ']' 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.397 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.398 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.398 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.398 [2024-09-29 21:46:02.203145] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:43.398 [2024-09-29 21:46:02.203266] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.398 [2024-09-29 21:46:02.372755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.657 [2024-09-29 21:46:02.568726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.917 [2024-09-29 21:46:02.755116] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.917 [2024-09-29 21:46:02.755151] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.177 [2024-09-29 21:46:03.025366] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.177 [2024-09-29 21:46:03.025422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.177 [2024-09-29 21:46:03.025432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.177 [2024-09-29 21:46:03.025440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.177 [2024-09-29 21:46:03.025446] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:44.177 [2024-09-29 21:46:03.025455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.177 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.178 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.178 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.178 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.178 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.178 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.178 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:44.178 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.178 "name": "Existed_Raid", 00:14:44.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.178 "strip_size_kb": 64, 00:14:44.178 "state": "configuring", 00:14:44.178 "raid_level": "raid5f", 00:14:44.178 "superblock": false, 00:14:44.178 "num_base_bdevs": 3, 00:14:44.178 "num_base_bdevs_discovered": 0, 00:14:44.178 "num_base_bdevs_operational": 3, 00:14:44.178 "base_bdevs_list": [ 00:14:44.178 { 00:14:44.178 "name": "BaseBdev1", 00:14:44.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.178 "is_configured": false, 00:14:44.178 "data_offset": 0, 00:14:44.178 "data_size": 0 00:14:44.178 }, 00:14:44.178 { 00:14:44.178 "name": "BaseBdev2", 00:14:44.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.178 "is_configured": false, 00:14:44.178 "data_offset": 0, 00:14:44.178 "data_size": 0 00:14:44.178 }, 00:14:44.178 { 00:14:44.178 "name": "BaseBdev3", 00:14:44.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.178 "is_configured": false, 00:14:44.178 "data_offset": 0, 00:14:44.178 "data_size": 0 00:14:44.178 } 00:14:44.178 ] 00:14:44.178 }' 00:14:44.178 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.178 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.747 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:44.747 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.747 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.747 [2024-09-29 21:46:03.500470] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.747 [2024-09-29 21:46:03.500574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:44.747 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.747 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:44.747 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.748 [2024-09-29 21:46:03.508484] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.748 [2024-09-29 21:46:03.508567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.748 [2024-09-29 21:46:03.508592] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.748 [2024-09-29 21:46:03.508612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.748 [2024-09-29 21:46:03.508628] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:44.748 [2024-09-29 21:46:03.508647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.748 [2024-09-29 21:46:03.581506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.748 BaseBdev1 00:14:44.748 21:46:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.748 [ 00:14:44.748 { 00:14:44.748 "name": "BaseBdev1", 00:14:44.748 "aliases": [ 00:14:44.748 "5e135b5c-30f2-496c-9644-c4f18099d471" 00:14:44.748 ], 00:14:44.748 "product_name": "Malloc disk", 00:14:44.748 "block_size": 512, 00:14:44.748 "num_blocks": 65536, 00:14:44.748 "uuid": "5e135b5c-30f2-496c-9644-c4f18099d471", 00:14:44.748 "assigned_rate_limits": { 00:14:44.748 "rw_ios_per_sec": 0, 00:14:44.748 
"rw_mbytes_per_sec": 0, 00:14:44.748 "r_mbytes_per_sec": 0, 00:14:44.748 "w_mbytes_per_sec": 0 00:14:44.748 }, 00:14:44.748 "claimed": true, 00:14:44.748 "claim_type": "exclusive_write", 00:14:44.748 "zoned": false, 00:14:44.748 "supported_io_types": { 00:14:44.748 "read": true, 00:14:44.748 "write": true, 00:14:44.748 "unmap": true, 00:14:44.748 "flush": true, 00:14:44.748 "reset": true, 00:14:44.748 "nvme_admin": false, 00:14:44.748 "nvme_io": false, 00:14:44.748 "nvme_io_md": false, 00:14:44.748 "write_zeroes": true, 00:14:44.748 "zcopy": true, 00:14:44.748 "get_zone_info": false, 00:14:44.748 "zone_management": false, 00:14:44.748 "zone_append": false, 00:14:44.748 "compare": false, 00:14:44.748 "compare_and_write": false, 00:14:44.748 "abort": true, 00:14:44.748 "seek_hole": false, 00:14:44.748 "seek_data": false, 00:14:44.748 "copy": true, 00:14:44.748 "nvme_iov_md": false 00:14:44.748 }, 00:14:44.748 "memory_domains": [ 00:14:44.748 { 00:14:44.748 "dma_device_id": "system", 00:14:44.748 "dma_device_type": 1 00:14:44.748 }, 00:14:44.748 { 00:14:44.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.748 "dma_device_type": 2 00:14:44.748 } 00:14:44.748 ], 00:14:44.748 "driver_specific": {} 00:14:44.748 } 00:14:44.748 ] 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.748 21:46:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.748 "name": "Existed_Raid", 00:14:44.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.748 "strip_size_kb": 64, 00:14:44.748 "state": "configuring", 00:14:44.748 "raid_level": "raid5f", 00:14:44.748 "superblock": false, 00:14:44.748 "num_base_bdevs": 3, 00:14:44.748 "num_base_bdevs_discovered": 1, 00:14:44.748 "num_base_bdevs_operational": 3, 00:14:44.748 "base_bdevs_list": [ 00:14:44.748 { 00:14:44.748 "name": "BaseBdev1", 00:14:44.748 "uuid": "5e135b5c-30f2-496c-9644-c4f18099d471", 00:14:44.748 "is_configured": true, 00:14:44.748 "data_offset": 0, 00:14:44.748 "data_size": 65536 00:14:44.748 }, 00:14:44.748 { 00:14:44.748 "name": 
"BaseBdev2", 00:14:44.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.748 "is_configured": false, 00:14:44.748 "data_offset": 0, 00:14:44.748 "data_size": 0 00:14:44.748 }, 00:14:44.748 { 00:14:44.748 "name": "BaseBdev3", 00:14:44.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.748 "is_configured": false, 00:14:44.748 "data_offset": 0, 00:14:44.748 "data_size": 0 00:14:44.748 } 00:14:44.748 ] 00:14:44.748 }' 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.748 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.318 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:45.318 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.318 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.318 [2024-09-29 21:46:04.012778] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.318 [2024-09-29 21:46:04.012866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:45.318 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.318 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:45.318 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.318 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.318 [2024-09-29 21:46:04.024795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.318 [2024-09-29 21:46:04.026490] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:45.318 [2024-09-29 21:46:04.026562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.318 [2024-09-29 21:46:04.026589] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:45.318 [2024-09-29 21:46:04.026610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:45.318 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.318 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:45.318 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:45.318 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.319 "name": "Existed_Raid", 00:14:45.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.319 "strip_size_kb": 64, 00:14:45.319 "state": "configuring", 00:14:45.319 "raid_level": "raid5f", 00:14:45.319 "superblock": false, 00:14:45.319 "num_base_bdevs": 3, 00:14:45.319 "num_base_bdevs_discovered": 1, 00:14:45.319 "num_base_bdevs_operational": 3, 00:14:45.319 "base_bdevs_list": [ 00:14:45.319 { 00:14:45.319 "name": "BaseBdev1", 00:14:45.319 "uuid": "5e135b5c-30f2-496c-9644-c4f18099d471", 00:14:45.319 "is_configured": true, 00:14:45.319 "data_offset": 0, 00:14:45.319 "data_size": 65536 00:14:45.319 }, 00:14:45.319 { 00:14:45.319 "name": "BaseBdev2", 00:14:45.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.319 "is_configured": false, 00:14:45.319 "data_offset": 0, 00:14:45.319 "data_size": 0 00:14:45.319 }, 00:14:45.319 { 00:14:45.319 "name": "BaseBdev3", 00:14:45.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.319 "is_configured": false, 00:14:45.319 "data_offset": 0, 00:14:45.319 "data_size": 0 00:14:45.319 } 00:14:45.319 ] 00:14:45.319 }' 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.319 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.579 [2024-09-29 21:46:04.444624] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:45.579 BaseBdev2 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:45.579 [ 00:14:45.579 { 00:14:45.579 "name": "BaseBdev2", 00:14:45.579 "aliases": [ 00:14:45.579 "331bbc65-5ba9-495d-b66d-fb1ea2c5d9d4" 00:14:45.579 ], 00:14:45.579 "product_name": "Malloc disk", 00:14:45.579 "block_size": 512, 00:14:45.579 "num_blocks": 65536, 00:14:45.579 "uuid": "331bbc65-5ba9-495d-b66d-fb1ea2c5d9d4", 00:14:45.579 "assigned_rate_limits": { 00:14:45.579 "rw_ios_per_sec": 0, 00:14:45.579 "rw_mbytes_per_sec": 0, 00:14:45.579 "r_mbytes_per_sec": 0, 00:14:45.579 "w_mbytes_per_sec": 0 00:14:45.579 }, 00:14:45.579 "claimed": true, 00:14:45.579 "claim_type": "exclusive_write", 00:14:45.579 "zoned": false, 00:14:45.579 "supported_io_types": { 00:14:45.579 "read": true, 00:14:45.579 "write": true, 00:14:45.579 "unmap": true, 00:14:45.579 "flush": true, 00:14:45.579 "reset": true, 00:14:45.579 "nvme_admin": false, 00:14:45.579 "nvme_io": false, 00:14:45.579 "nvme_io_md": false, 00:14:45.579 "write_zeroes": true, 00:14:45.579 "zcopy": true, 00:14:45.579 "get_zone_info": false, 00:14:45.579 "zone_management": false, 00:14:45.579 "zone_append": false, 00:14:45.579 "compare": false, 00:14:45.579 "compare_and_write": false, 00:14:45.579 "abort": true, 00:14:45.579 "seek_hole": false, 00:14:45.579 "seek_data": false, 00:14:45.579 "copy": true, 00:14:45.579 "nvme_iov_md": false 00:14:45.579 }, 00:14:45.579 "memory_domains": [ 00:14:45.579 { 00:14:45.579 "dma_device_id": "system", 00:14:45.579 "dma_device_type": 1 00:14:45.579 }, 00:14:45.579 { 00:14:45.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.579 "dma_device_type": 2 00:14:45.579 } 00:14:45.579 ], 00:14:45.579 "driver_specific": {} 00:14:45.579 } 00:14:45.579 ] 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.579 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:45.579 "name": "Existed_Raid", 00:14:45.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.579 "strip_size_kb": 64, 00:14:45.579 "state": "configuring", 00:14:45.579 "raid_level": "raid5f", 00:14:45.579 "superblock": false, 00:14:45.579 "num_base_bdevs": 3, 00:14:45.579 "num_base_bdevs_discovered": 2, 00:14:45.579 "num_base_bdevs_operational": 3, 00:14:45.579 "base_bdevs_list": [ 00:14:45.579 { 00:14:45.579 "name": "BaseBdev1", 00:14:45.579 "uuid": "5e135b5c-30f2-496c-9644-c4f18099d471", 00:14:45.579 "is_configured": true, 00:14:45.579 "data_offset": 0, 00:14:45.579 "data_size": 65536 00:14:45.579 }, 00:14:45.579 { 00:14:45.579 "name": "BaseBdev2", 00:14:45.579 "uuid": "331bbc65-5ba9-495d-b66d-fb1ea2c5d9d4", 00:14:45.579 "is_configured": true, 00:14:45.579 "data_offset": 0, 00:14:45.579 "data_size": 65536 00:14:45.579 }, 00:14:45.580 { 00:14:45.580 "name": "BaseBdev3", 00:14:45.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.580 "is_configured": false, 00:14:45.580 "data_offset": 0, 00:14:45.580 "data_size": 0 00:14:45.580 } 00:14:45.580 ] 00:14:45.580 }' 00:14:45.580 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.580 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.150 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:46.150 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.150 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.150 [2024-09-29 21:46:05.000000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.150 [2024-09-29 21:46:05.000097] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:46.150 [2024-09-29 21:46:05.000118] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:46.150 [2024-09-29 21:46:05.000383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:46.150 [2024-09-29 21:46:05.005444] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:46.150 [2024-09-29 21:46:05.005464] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:46.150 [2024-09-29 21:46:05.005716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.150 BaseBdev3 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.150 [ 00:14:46.150 { 00:14:46.150 "name": "BaseBdev3", 00:14:46.150 "aliases": [ 00:14:46.150 "44fbe6bf-38ea-4f3f-9cd1-051e956908cf" 00:14:46.150 ], 00:14:46.150 "product_name": "Malloc disk", 00:14:46.150 "block_size": 512, 00:14:46.150 "num_blocks": 65536, 00:14:46.150 "uuid": "44fbe6bf-38ea-4f3f-9cd1-051e956908cf", 00:14:46.150 "assigned_rate_limits": { 00:14:46.150 "rw_ios_per_sec": 0, 00:14:46.150 "rw_mbytes_per_sec": 0, 00:14:46.150 "r_mbytes_per_sec": 0, 00:14:46.150 "w_mbytes_per_sec": 0 00:14:46.150 }, 00:14:46.150 "claimed": true, 00:14:46.150 "claim_type": "exclusive_write", 00:14:46.150 "zoned": false, 00:14:46.150 "supported_io_types": { 00:14:46.150 "read": true, 00:14:46.150 "write": true, 00:14:46.150 "unmap": true, 00:14:46.150 "flush": true, 00:14:46.150 "reset": true, 00:14:46.150 "nvme_admin": false, 00:14:46.150 "nvme_io": false, 00:14:46.150 "nvme_io_md": false, 00:14:46.150 "write_zeroes": true, 00:14:46.150 "zcopy": true, 00:14:46.150 "get_zone_info": false, 00:14:46.150 "zone_management": false, 00:14:46.150 "zone_append": false, 00:14:46.150 "compare": false, 00:14:46.150 "compare_and_write": false, 00:14:46.150 "abort": true, 00:14:46.150 "seek_hole": false, 00:14:46.150 "seek_data": false, 00:14:46.150 "copy": true, 00:14:46.150 "nvme_iov_md": false 00:14:46.150 }, 00:14:46.150 "memory_domains": [ 00:14:46.150 { 00:14:46.150 "dma_device_id": "system", 00:14:46.150 "dma_device_type": 1 00:14:46.150 }, 00:14:46.150 { 00:14:46.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.150 "dma_device_type": 2 00:14:46.150 } 00:14:46.150 ], 00:14:46.150 "driver_specific": {} 00:14:46.150 } 00:14:46.150 ] 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.150 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.151 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.151 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.151 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.151 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.151 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.151 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.151 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.151 21:46:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.151 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.151 "name": "Existed_Raid", 00:14:46.151 "uuid": "b16c88c5-bbe0-484f-b692-ad27631647f6", 00:14:46.151 "strip_size_kb": 64, 00:14:46.151 "state": "online", 00:14:46.151 "raid_level": "raid5f", 00:14:46.151 "superblock": false, 00:14:46.151 "num_base_bdevs": 3, 00:14:46.151 "num_base_bdevs_discovered": 3, 00:14:46.151 "num_base_bdevs_operational": 3, 00:14:46.151 "base_bdevs_list": [ 00:14:46.151 { 00:14:46.151 "name": "BaseBdev1", 00:14:46.151 "uuid": "5e135b5c-30f2-496c-9644-c4f18099d471", 00:14:46.151 "is_configured": true, 00:14:46.151 "data_offset": 0, 00:14:46.151 "data_size": 65536 00:14:46.151 }, 00:14:46.151 { 00:14:46.151 "name": "BaseBdev2", 00:14:46.151 "uuid": "331bbc65-5ba9-495d-b66d-fb1ea2c5d9d4", 00:14:46.151 "is_configured": true, 00:14:46.151 "data_offset": 0, 00:14:46.151 "data_size": 65536 00:14:46.151 }, 00:14:46.151 { 00:14:46.151 "name": "BaseBdev3", 00:14:46.151 "uuid": "44fbe6bf-38ea-4f3f-9cd1-051e956908cf", 00:14:46.151 "is_configured": true, 00:14:46.151 "data_offset": 0, 00:14:46.151 "data_size": 65536 00:14:46.151 } 00:14:46.151 ] 00:14:46.151 }' 00:14:46.151 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.151 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.720 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:46.720 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:46.720 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:46.720 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:46.720 21:46:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:46.720 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:46.720 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:46.720 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:46.720 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.720 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.720 [2024-09-29 21:46:05.502672] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:46.720 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.720 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:46.720 "name": "Existed_Raid", 00:14:46.720 "aliases": [ 00:14:46.720 "b16c88c5-bbe0-484f-b692-ad27631647f6" 00:14:46.720 ], 00:14:46.720 "product_name": "Raid Volume", 00:14:46.720 "block_size": 512, 00:14:46.720 "num_blocks": 131072, 00:14:46.720 "uuid": "b16c88c5-bbe0-484f-b692-ad27631647f6", 00:14:46.720 "assigned_rate_limits": { 00:14:46.720 "rw_ios_per_sec": 0, 00:14:46.720 "rw_mbytes_per_sec": 0, 00:14:46.720 "r_mbytes_per_sec": 0, 00:14:46.720 "w_mbytes_per_sec": 0 00:14:46.720 }, 00:14:46.720 "claimed": false, 00:14:46.720 "zoned": false, 00:14:46.720 "supported_io_types": { 00:14:46.720 "read": true, 00:14:46.720 "write": true, 00:14:46.720 "unmap": false, 00:14:46.720 "flush": false, 00:14:46.720 "reset": true, 00:14:46.720 "nvme_admin": false, 00:14:46.720 "nvme_io": false, 00:14:46.720 "nvme_io_md": false, 00:14:46.720 "write_zeroes": true, 00:14:46.720 "zcopy": false, 00:14:46.720 "get_zone_info": false, 00:14:46.720 "zone_management": false, 00:14:46.720 "zone_append": false, 
00:14:46.720 "compare": false, 00:14:46.720 "compare_and_write": false, 00:14:46.720 "abort": false, 00:14:46.720 "seek_hole": false, 00:14:46.720 "seek_data": false, 00:14:46.720 "copy": false, 00:14:46.720 "nvme_iov_md": false 00:14:46.720 }, 00:14:46.720 "driver_specific": { 00:14:46.720 "raid": { 00:14:46.720 "uuid": "b16c88c5-bbe0-484f-b692-ad27631647f6", 00:14:46.720 "strip_size_kb": 64, 00:14:46.720 "state": "online", 00:14:46.720 "raid_level": "raid5f", 00:14:46.720 "superblock": false, 00:14:46.720 "num_base_bdevs": 3, 00:14:46.720 "num_base_bdevs_discovered": 3, 00:14:46.720 "num_base_bdevs_operational": 3, 00:14:46.720 "base_bdevs_list": [ 00:14:46.720 { 00:14:46.720 "name": "BaseBdev1", 00:14:46.720 "uuid": "5e135b5c-30f2-496c-9644-c4f18099d471", 00:14:46.720 "is_configured": true, 00:14:46.720 "data_offset": 0, 00:14:46.720 "data_size": 65536 00:14:46.720 }, 00:14:46.720 { 00:14:46.720 "name": "BaseBdev2", 00:14:46.720 "uuid": "331bbc65-5ba9-495d-b66d-fb1ea2c5d9d4", 00:14:46.720 "is_configured": true, 00:14:46.720 "data_offset": 0, 00:14:46.720 "data_size": 65536 00:14:46.720 }, 00:14:46.720 { 00:14:46.720 "name": "BaseBdev3", 00:14:46.720 "uuid": "44fbe6bf-38ea-4f3f-9cd1-051e956908cf", 00:14:46.720 "is_configured": true, 00:14:46.720 "data_offset": 0, 00:14:46.720 "data_size": 65536 00:14:46.720 } 00:14:46.720 ] 00:14:46.720 } 00:14:46.720 } 00:14:46.720 }' 00:14:46.720 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:46.720 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:46.720 BaseBdev2 00:14:46.720 BaseBdev3' 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.721 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.981 [2024-09-29 21:46:05.766099] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:46.981 
21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.981 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.981 "name": "Existed_Raid", 00:14:46.981 "uuid": "b16c88c5-bbe0-484f-b692-ad27631647f6", 00:14:46.981 "strip_size_kb": 64, 00:14:46.981 "state": 
"online", 00:14:46.981 "raid_level": "raid5f", 00:14:46.981 "superblock": false, 00:14:46.981 "num_base_bdevs": 3, 00:14:46.981 "num_base_bdevs_discovered": 2, 00:14:46.981 "num_base_bdevs_operational": 2, 00:14:46.981 "base_bdevs_list": [ 00:14:46.981 { 00:14:46.981 "name": null, 00:14:46.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.981 "is_configured": false, 00:14:46.981 "data_offset": 0, 00:14:46.981 "data_size": 65536 00:14:46.981 }, 00:14:46.981 { 00:14:46.981 "name": "BaseBdev2", 00:14:46.981 "uuid": "331bbc65-5ba9-495d-b66d-fb1ea2c5d9d4", 00:14:46.981 "is_configured": true, 00:14:46.981 "data_offset": 0, 00:14:46.981 "data_size": 65536 00:14:46.981 }, 00:14:46.981 { 00:14:46.981 "name": "BaseBdev3", 00:14:46.981 "uuid": "44fbe6bf-38ea-4f3f-9cd1-051e956908cf", 00:14:46.981 "is_configured": true, 00:14:46.982 "data_offset": 0, 00:14:46.982 "data_size": 65536 00:14:46.982 } 00:14:46.982 ] 00:14:46.982 }' 00:14:46.982 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.982 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.552 [2024-09-29 21:46:06.366204] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:47.552 [2024-09-29 21:46:06.366375] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:47.552 [2024-09-29 21:46:06.453716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.552 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.552 [2024-09-29 21:46:06.513606] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:47.552 [2024-09-29 21:46:06.513660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.812 BaseBdev2 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.812 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:47.812 [ 00:14:47.812 { 00:14:47.812 "name": "BaseBdev2", 00:14:47.812 "aliases": [ 00:14:47.812 "be89adda-c924-4045-8390-b1e3fcea3d2b" 00:14:47.812 ], 00:14:47.812 "product_name": "Malloc disk", 00:14:47.812 "block_size": 512, 00:14:47.812 "num_blocks": 65536, 00:14:47.812 "uuid": "be89adda-c924-4045-8390-b1e3fcea3d2b", 00:14:47.812 "assigned_rate_limits": { 00:14:47.812 "rw_ios_per_sec": 0, 00:14:47.812 "rw_mbytes_per_sec": 0, 00:14:47.812 "r_mbytes_per_sec": 0, 00:14:47.812 "w_mbytes_per_sec": 0 00:14:47.812 }, 00:14:47.812 "claimed": false, 00:14:47.812 "zoned": false, 00:14:47.812 "supported_io_types": { 00:14:47.812 "read": true, 00:14:47.812 "write": true, 00:14:47.812 "unmap": true, 00:14:47.812 "flush": true, 00:14:47.812 "reset": true, 00:14:47.812 "nvme_admin": false, 00:14:47.812 "nvme_io": false, 00:14:47.812 "nvme_io_md": false, 00:14:47.812 "write_zeroes": true, 00:14:47.812 "zcopy": true, 00:14:47.812 "get_zone_info": false, 00:14:47.812 "zone_management": false, 00:14:47.812 "zone_append": false, 00:14:47.812 "compare": false, 00:14:47.812 "compare_and_write": false, 00:14:47.812 "abort": true, 00:14:47.813 "seek_hole": false, 00:14:47.813 "seek_data": false, 00:14:47.813 "copy": true, 00:14:47.813 "nvme_iov_md": false 00:14:47.813 }, 00:14:47.813 "memory_domains": [ 00:14:47.813 { 00:14:47.813 "dma_device_id": "system", 00:14:47.813 "dma_device_type": 1 00:14:47.813 }, 00:14:47.813 { 00:14:47.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.813 "dma_device_type": 2 00:14:47.813 } 00:14:47.813 ], 00:14:47.813 "driver_specific": {} 00:14:47.813 } 00:14:47.813 ] 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.813 BaseBdev3 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.813 21:46:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:48.073 [ 00:14:48.073 { 00:14:48.073 "name": "BaseBdev3", 00:14:48.073 "aliases": [ 00:14:48.073 "0c709990-46f7-4572-ac58-ec08b678b688" 00:14:48.073 ], 00:14:48.073 "product_name": "Malloc disk", 00:14:48.073 "block_size": 512, 00:14:48.073 "num_blocks": 65536, 00:14:48.073 "uuid": "0c709990-46f7-4572-ac58-ec08b678b688", 00:14:48.073 "assigned_rate_limits": { 00:14:48.073 "rw_ios_per_sec": 0, 00:14:48.073 "rw_mbytes_per_sec": 0, 00:14:48.073 "r_mbytes_per_sec": 0, 00:14:48.073 "w_mbytes_per_sec": 0 00:14:48.073 }, 00:14:48.073 "claimed": false, 00:14:48.073 "zoned": false, 00:14:48.073 "supported_io_types": { 00:14:48.073 "read": true, 00:14:48.073 "write": true, 00:14:48.073 "unmap": true, 00:14:48.073 "flush": true, 00:14:48.073 "reset": true, 00:14:48.073 "nvme_admin": false, 00:14:48.073 "nvme_io": false, 00:14:48.073 "nvme_io_md": false, 00:14:48.073 "write_zeroes": true, 00:14:48.073 "zcopy": true, 00:14:48.073 "get_zone_info": false, 00:14:48.073 "zone_management": false, 00:14:48.073 "zone_append": false, 00:14:48.073 "compare": false, 00:14:48.073 "compare_and_write": false, 00:14:48.073 "abort": true, 00:14:48.073 "seek_hole": false, 00:14:48.073 "seek_data": false, 00:14:48.073 "copy": true, 00:14:48.073 "nvme_iov_md": false 00:14:48.073 }, 00:14:48.073 "memory_domains": [ 00:14:48.073 { 00:14:48.073 "dma_device_id": "system", 00:14:48.073 "dma_device_type": 1 00:14:48.073 }, 00:14:48.073 { 00:14:48.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.073 "dma_device_type": 2 00:14:48.073 } 00:14:48.073 ], 00:14:48.073 "driver_specific": {} 00:14:48.073 } 00:14:48.073 ] 00:14:48.073 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.073 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:48.073 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:48.073 21:46:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:48.073 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:48.073 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.074 [2024-09-29 21:46:06.815092] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.074 [2024-09-29 21:46:06.815214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.074 [2024-09-29 21:46:06.815251] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.074 [2024-09-29 21:46:06.816943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.074 21:46:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.074 "name": "Existed_Raid", 00:14:48.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.074 "strip_size_kb": 64, 00:14:48.074 "state": "configuring", 00:14:48.074 "raid_level": "raid5f", 00:14:48.074 "superblock": false, 00:14:48.074 "num_base_bdevs": 3, 00:14:48.074 "num_base_bdevs_discovered": 2, 00:14:48.074 "num_base_bdevs_operational": 3, 00:14:48.074 "base_bdevs_list": [ 00:14:48.074 { 00:14:48.074 "name": "BaseBdev1", 00:14:48.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.074 "is_configured": false, 00:14:48.074 "data_offset": 0, 00:14:48.074 "data_size": 0 00:14:48.074 }, 00:14:48.074 { 00:14:48.074 "name": "BaseBdev2", 00:14:48.074 "uuid": "be89adda-c924-4045-8390-b1e3fcea3d2b", 00:14:48.074 "is_configured": true, 00:14:48.074 "data_offset": 0, 00:14:48.074 "data_size": 65536 00:14:48.074 }, 00:14:48.074 { 00:14:48.074 "name": "BaseBdev3", 00:14:48.074 "uuid": "0c709990-46f7-4572-ac58-ec08b678b688", 00:14:48.074 "is_configured": true, 
00:14:48.074 "data_offset": 0, 00:14:48.074 "data_size": 65536 00:14:48.074 } 00:14:48.074 ] 00:14:48.074 }' 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.074 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.334 [2024-09-29 21:46:07.262259] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.334 21:46:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.334 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.594 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.594 "name": "Existed_Raid", 00:14:48.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.594 "strip_size_kb": 64, 00:14:48.594 "state": "configuring", 00:14:48.594 "raid_level": "raid5f", 00:14:48.594 "superblock": false, 00:14:48.594 "num_base_bdevs": 3, 00:14:48.594 "num_base_bdevs_discovered": 1, 00:14:48.594 "num_base_bdevs_operational": 3, 00:14:48.594 "base_bdevs_list": [ 00:14:48.594 { 00:14:48.594 "name": "BaseBdev1", 00:14:48.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.594 "is_configured": false, 00:14:48.594 "data_offset": 0, 00:14:48.594 "data_size": 0 00:14:48.594 }, 00:14:48.594 { 00:14:48.594 "name": null, 00:14:48.594 "uuid": "be89adda-c924-4045-8390-b1e3fcea3d2b", 00:14:48.594 "is_configured": false, 00:14:48.594 "data_offset": 0, 00:14:48.594 "data_size": 65536 00:14:48.594 }, 00:14:48.594 { 00:14:48.594 "name": "BaseBdev3", 00:14:48.594 "uuid": "0c709990-46f7-4572-ac58-ec08b678b688", 00:14:48.594 "is_configured": true, 00:14:48.594 "data_offset": 0, 00:14:48.594 "data_size": 65536 00:14:48.594 } 00:14:48.594 ] 00:14:48.594 }' 00:14:48.594 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.594 21:46:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.854 [2024-09-29 21:46:07.781982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.854 BaseBdev1 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:48.854 21:46:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.854 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.854 [ 00:14:48.854 { 00:14:48.854 "name": "BaseBdev1", 00:14:48.854 "aliases": [ 00:14:48.854 "f89b81d2-596b-4676-8aee-84a7b37747bb" 00:14:48.854 ], 00:14:48.854 "product_name": "Malloc disk", 00:14:48.854 "block_size": 512, 00:14:48.854 "num_blocks": 65536, 00:14:48.854 "uuid": "f89b81d2-596b-4676-8aee-84a7b37747bb", 00:14:48.854 "assigned_rate_limits": { 00:14:48.854 "rw_ios_per_sec": 0, 00:14:48.854 "rw_mbytes_per_sec": 0, 00:14:48.854 "r_mbytes_per_sec": 0, 00:14:48.854 "w_mbytes_per_sec": 0 00:14:48.854 }, 00:14:48.854 "claimed": true, 00:14:48.854 "claim_type": "exclusive_write", 00:14:48.854 "zoned": false, 00:14:48.854 "supported_io_types": { 00:14:48.854 "read": true, 00:14:48.854 "write": true, 00:14:48.854 "unmap": true, 00:14:48.854 "flush": true, 00:14:48.854 "reset": true, 00:14:48.854 "nvme_admin": false, 00:14:48.854 "nvme_io": false, 00:14:48.855 "nvme_io_md": false, 00:14:48.855 "write_zeroes": true, 00:14:48.855 "zcopy": true, 00:14:48.855 "get_zone_info": false, 00:14:48.855 "zone_management": false, 00:14:48.855 "zone_append": false, 00:14:48.855 
"compare": false, 00:14:48.855 "compare_and_write": false, 00:14:48.855 "abort": true, 00:14:48.855 "seek_hole": false, 00:14:48.855 "seek_data": false, 00:14:48.855 "copy": true, 00:14:48.855 "nvme_iov_md": false 00:14:48.855 }, 00:14:48.855 "memory_domains": [ 00:14:48.855 { 00:14:48.855 "dma_device_id": "system", 00:14:48.855 "dma_device_type": 1 00:14:48.855 }, 00:14:48.855 { 00:14:48.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.855 "dma_device_type": 2 00:14:48.855 } 00:14:48.855 ], 00:14:48.855 "driver_specific": {} 00:14:48.855 } 00:14:48.855 ] 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.855 21:46:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.855 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.114 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.114 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.114 "name": "Existed_Raid", 00:14:49.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.114 "strip_size_kb": 64, 00:14:49.114 "state": "configuring", 00:14:49.114 "raid_level": "raid5f", 00:14:49.114 "superblock": false, 00:14:49.114 "num_base_bdevs": 3, 00:14:49.114 "num_base_bdevs_discovered": 2, 00:14:49.114 "num_base_bdevs_operational": 3, 00:14:49.114 "base_bdevs_list": [ 00:14:49.114 { 00:14:49.114 "name": "BaseBdev1", 00:14:49.114 "uuid": "f89b81d2-596b-4676-8aee-84a7b37747bb", 00:14:49.114 "is_configured": true, 00:14:49.114 "data_offset": 0, 00:14:49.114 "data_size": 65536 00:14:49.114 }, 00:14:49.114 { 00:14:49.114 "name": null, 00:14:49.114 "uuid": "be89adda-c924-4045-8390-b1e3fcea3d2b", 00:14:49.114 "is_configured": false, 00:14:49.114 "data_offset": 0, 00:14:49.114 "data_size": 65536 00:14:49.114 }, 00:14:49.114 { 00:14:49.114 "name": "BaseBdev3", 00:14:49.114 "uuid": "0c709990-46f7-4572-ac58-ec08b678b688", 00:14:49.114 "is_configured": true, 00:14:49.114 "data_offset": 0, 00:14:49.114 "data_size": 65536 00:14:49.114 } 00:14:49.114 ] 00:14:49.114 }' 00:14:49.114 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.114 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.374 21:46:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.374 [2024-09-29 21:46:08.313136] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.374 21:46:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.374 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.634 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.634 "name": "Existed_Raid", 00:14:49.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.634 "strip_size_kb": 64, 00:14:49.634 "state": "configuring", 00:14:49.634 "raid_level": "raid5f", 00:14:49.634 "superblock": false, 00:14:49.634 "num_base_bdevs": 3, 00:14:49.634 "num_base_bdevs_discovered": 1, 00:14:49.634 "num_base_bdevs_operational": 3, 00:14:49.634 "base_bdevs_list": [ 00:14:49.634 { 00:14:49.634 "name": "BaseBdev1", 00:14:49.634 "uuid": "f89b81d2-596b-4676-8aee-84a7b37747bb", 00:14:49.634 "is_configured": true, 00:14:49.634 "data_offset": 0, 00:14:49.634 "data_size": 65536 00:14:49.634 }, 00:14:49.634 { 00:14:49.634 "name": null, 00:14:49.634 "uuid": "be89adda-c924-4045-8390-b1e3fcea3d2b", 00:14:49.634 "is_configured": false, 00:14:49.634 "data_offset": 0, 00:14:49.634 "data_size": 65536 00:14:49.634 }, 00:14:49.634 { 00:14:49.634 "name": null, 
00:14:49.634 "uuid": "0c709990-46f7-4572-ac58-ec08b678b688", 00:14:49.634 "is_configured": false, 00:14:49.634 "data_offset": 0, 00:14:49.634 "data_size": 65536 00:14:49.634 } 00:14:49.634 ] 00:14:49.634 }' 00:14:49.634 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.634 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.895 [2024-09-29 21:46:08.776307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.895 21:46:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.895 "name": "Existed_Raid", 00:14:49.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.895 "strip_size_kb": 64, 00:14:49.895 "state": "configuring", 00:14:49.895 "raid_level": "raid5f", 00:14:49.895 "superblock": false, 00:14:49.895 "num_base_bdevs": 3, 00:14:49.895 "num_base_bdevs_discovered": 2, 00:14:49.895 "num_base_bdevs_operational": 3, 00:14:49.895 "base_bdevs_list": [ 00:14:49.895 { 
00:14:49.895 "name": "BaseBdev1", 00:14:49.895 "uuid": "f89b81d2-596b-4676-8aee-84a7b37747bb", 00:14:49.895 "is_configured": true, 00:14:49.895 "data_offset": 0, 00:14:49.895 "data_size": 65536 00:14:49.895 }, 00:14:49.895 { 00:14:49.895 "name": null, 00:14:49.895 "uuid": "be89adda-c924-4045-8390-b1e3fcea3d2b", 00:14:49.895 "is_configured": false, 00:14:49.895 "data_offset": 0, 00:14:49.895 "data_size": 65536 00:14:49.895 }, 00:14:49.895 { 00:14:49.895 "name": "BaseBdev3", 00:14:49.895 "uuid": "0c709990-46f7-4572-ac58-ec08b678b688", 00:14:49.895 "is_configured": true, 00:14:49.895 "data_offset": 0, 00:14:49.895 "data_size": 65536 00:14:49.895 } 00:14:49.895 ] 00:14:49.895 }' 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.895 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.464 [2024-09-29 21:46:09.299699] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.464 "name": "Existed_Raid", 00:14:50.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.464 "strip_size_kb": 64, 00:14:50.464 "state": "configuring", 00:14:50.464 "raid_level": "raid5f", 00:14:50.464 "superblock": false, 00:14:50.464 "num_base_bdevs": 3, 00:14:50.464 "num_base_bdevs_discovered": 1, 00:14:50.464 "num_base_bdevs_operational": 3, 00:14:50.464 "base_bdevs_list": [ 00:14:50.464 { 00:14:50.464 "name": null, 00:14:50.464 "uuid": "f89b81d2-596b-4676-8aee-84a7b37747bb", 00:14:50.464 "is_configured": false, 00:14:50.464 "data_offset": 0, 00:14:50.464 "data_size": 65536 00:14:50.464 }, 00:14:50.464 { 00:14:50.464 "name": null, 00:14:50.464 "uuid": "be89adda-c924-4045-8390-b1e3fcea3d2b", 00:14:50.464 "is_configured": false, 00:14:50.464 "data_offset": 0, 00:14:50.464 "data_size": 65536 00:14:50.464 }, 00:14:50.464 { 00:14:50.464 "name": "BaseBdev3", 00:14:50.464 "uuid": "0c709990-46f7-4572-ac58-ec08b678b688", 00:14:50.464 "is_configured": true, 00:14:50.464 "data_offset": 0, 00:14:50.464 "data_size": 65536 00:14:50.464 } 00:14:50.464 ] 00:14:50.464 }' 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.464 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.036 [2024-09-29 21:46:09.871021] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.036 21:46:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.036 "name": "Existed_Raid", 00:14:51.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.036 "strip_size_kb": 64, 00:14:51.036 "state": "configuring", 00:14:51.036 "raid_level": "raid5f", 00:14:51.036 "superblock": false, 00:14:51.036 "num_base_bdevs": 3, 00:14:51.036 "num_base_bdevs_discovered": 2, 00:14:51.036 "num_base_bdevs_operational": 3, 00:14:51.036 "base_bdevs_list": [ 00:14:51.036 { 00:14:51.036 "name": null, 00:14:51.036 "uuid": "f89b81d2-596b-4676-8aee-84a7b37747bb", 00:14:51.036 "is_configured": false, 00:14:51.036 "data_offset": 0, 00:14:51.036 "data_size": 65536 00:14:51.036 }, 00:14:51.036 { 00:14:51.036 "name": "BaseBdev2", 00:14:51.036 "uuid": "be89adda-c924-4045-8390-b1e3fcea3d2b", 00:14:51.036 "is_configured": true, 00:14:51.036 "data_offset": 0, 00:14:51.036 "data_size": 65536 00:14:51.036 }, 00:14:51.036 { 00:14:51.036 "name": "BaseBdev3", 00:14:51.036 "uuid": "0c709990-46f7-4572-ac58-ec08b678b688", 00:14:51.036 "is_configured": true, 00:14:51.036 "data_offset": 0, 00:14:51.036 "data_size": 65536 00:14:51.036 } 00:14:51.036 ] 00:14:51.036 }' 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.036 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.606 21:46:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f89b81d2-596b-4676-8aee-84a7b37747bb 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.606 [2024-09-29 21:46:10.476715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:51.606 [2024-09-29 21:46:10.476819] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:51.606 [2024-09-29 21:46:10.476846] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:51.606 [2024-09-29 21:46:10.477128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:51.606 [2024-09-29 21:46:10.482338] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:51.606 [2024-09-29 21:46:10.482391] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:51.606 [2024-09-29 21:46:10.482658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.606 NewBaseBdev 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:51.606 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.607 21:46:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.607 [ 00:14:51.607 { 00:14:51.607 "name": "NewBaseBdev", 00:14:51.607 "aliases": [ 00:14:51.607 "f89b81d2-596b-4676-8aee-84a7b37747bb" 00:14:51.607 ], 00:14:51.607 "product_name": "Malloc disk", 00:14:51.607 "block_size": 512, 00:14:51.607 "num_blocks": 65536, 00:14:51.607 "uuid": "f89b81d2-596b-4676-8aee-84a7b37747bb", 00:14:51.607 "assigned_rate_limits": { 00:14:51.607 "rw_ios_per_sec": 0, 00:14:51.607 "rw_mbytes_per_sec": 0, 00:14:51.607 "r_mbytes_per_sec": 0, 00:14:51.607 "w_mbytes_per_sec": 0 00:14:51.607 }, 00:14:51.607 "claimed": true, 00:14:51.607 "claim_type": "exclusive_write", 00:14:51.607 "zoned": false, 00:14:51.607 "supported_io_types": { 00:14:51.607 "read": true, 00:14:51.607 "write": true, 00:14:51.607 "unmap": true, 00:14:51.607 "flush": true, 00:14:51.607 "reset": true, 00:14:51.607 "nvme_admin": false, 00:14:51.607 "nvme_io": false, 00:14:51.607 "nvme_io_md": false, 00:14:51.607 "write_zeroes": true, 00:14:51.607 "zcopy": true, 00:14:51.607 "get_zone_info": false, 00:14:51.607 "zone_management": false, 00:14:51.607 "zone_append": false, 00:14:51.607 "compare": false, 00:14:51.607 "compare_and_write": false, 00:14:51.607 "abort": true, 00:14:51.607 "seek_hole": false, 00:14:51.607 "seek_data": false, 00:14:51.607 "copy": true, 00:14:51.607 "nvme_iov_md": false 00:14:51.607 }, 00:14:51.607 "memory_domains": [ 00:14:51.607 { 00:14:51.607 "dma_device_id": "system", 00:14:51.607 "dma_device_type": 1 00:14:51.607 }, 00:14:51.607 { 00:14:51.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.607 "dma_device_type": 2 00:14:51.607 } 00:14:51.607 ], 00:14:51.607 "driver_specific": {} 00:14:51.607 } 00:14:51.607 ] 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:51.607 21:46:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.607 "name": "Existed_Raid", 00:14:51.607 "uuid": "0f113069-bf06-40f3-a1ff-94890b184c36", 00:14:51.607 "strip_size_kb": 64, 00:14:51.607 "state": "online", 
00:14:51.607 "raid_level": "raid5f", 00:14:51.607 "superblock": false, 00:14:51.607 "num_base_bdevs": 3, 00:14:51.607 "num_base_bdevs_discovered": 3, 00:14:51.607 "num_base_bdevs_operational": 3, 00:14:51.607 "base_bdevs_list": [ 00:14:51.607 { 00:14:51.607 "name": "NewBaseBdev", 00:14:51.607 "uuid": "f89b81d2-596b-4676-8aee-84a7b37747bb", 00:14:51.607 "is_configured": true, 00:14:51.607 "data_offset": 0, 00:14:51.607 "data_size": 65536 00:14:51.607 }, 00:14:51.607 { 00:14:51.607 "name": "BaseBdev2", 00:14:51.607 "uuid": "be89adda-c924-4045-8390-b1e3fcea3d2b", 00:14:51.607 "is_configured": true, 00:14:51.607 "data_offset": 0, 00:14:51.607 "data_size": 65536 00:14:51.607 }, 00:14:51.607 { 00:14:51.607 "name": "BaseBdev3", 00:14:51.607 "uuid": "0c709990-46f7-4572-ac58-ec08b678b688", 00:14:51.607 "is_configured": true, 00:14:51.607 "data_offset": 0, 00:14:51.607 "data_size": 65536 00:14:51.607 } 00:14:51.607 ] 00:14:51.607 }' 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.607 21:46:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.176 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:52.176 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:52.176 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:52.176 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:52.176 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:52.176 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:52.176 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:52.176 21:46:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.176 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.176 21:46:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:52.176 [2024-09-29 21:46:11.007989] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.176 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.176 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:52.176 "name": "Existed_Raid", 00:14:52.176 "aliases": [ 00:14:52.176 "0f113069-bf06-40f3-a1ff-94890b184c36" 00:14:52.176 ], 00:14:52.176 "product_name": "Raid Volume", 00:14:52.176 "block_size": 512, 00:14:52.176 "num_blocks": 131072, 00:14:52.176 "uuid": "0f113069-bf06-40f3-a1ff-94890b184c36", 00:14:52.176 "assigned_rate_limits": { 00:14:52.176 "rw_ios_per_sec": 0, 00:14:52.176 "rw_mbytes_per_sec": 0, 00:14:52.176 "r_mbytes_per_sec": 0, 00:14:52.176 "w_mbytes_per_sec": 0 00:14:52.176 }, 00:14:52.176 "claimed": false, 00:14:52.176 "zoned": false, 00:14:52.176 "supported_io_types": { 00:14:52.176 "read": true, 00:14:52.176 "write": true, 00:14:52.176 "unmap": false, 00:14:52.176 "flush": false, 00:14:52.176 "reset": true, 00:14:52.176 "nvme_admin": false, 00:14:52.176 "nvme_io": false, 00:14:52.176 "nvme_io_md": false, 00:14:52.176 "write_zeroes": true, 00:14:52.176 "zcopy": false, 00:14:52.176 "get_zone_info": false, 00:14:52.176 "zone_management": false, 00:14:52.176 "zone_append": false, 00:14:52.176 "compare": false, 00:14:52.176 "compare_and_write": false, 00:14:52.176 "abort": false, 00:14:52.176 "seek_hole": false, 00:14:52.176 "seek_data": false, 00:14:52.176 "copy": false, 00:14:52.176 "nvme_iov_md": false 00:14:52.176 }, 00:14:52.176 "driver_specific": { 00:14:52.176 "raid": { 00:14:52.176 "uuid": 
"0f113069-bf06-40f3-a1ff-94890b184c36", 00:14:52.176 "strip_size_kb": 64, 00:14:52.176 "state": "online", 00:14:52.176 "raid_level": "raid5f", 00:14:52.176 "superblock": false, 00:14:52.176 "num_base_bdevs": 3, 00:14:52.176 "num_base_bdevs_discovered": 3, 00:14:52.176 "num_base_bdevs_operational": 3, 00:14:52.176 "base_bdevs_list": [ 00:14:52.176 { 00:14:52.176 "name": "NewBaseBdev", 00:14:52.176 "uuid": "f89b81d2-596b-4676-8aee-84a7b37747bb", 00:14:52.176 "is_configured": true, 00:14:52.176 "data_offset": 0, 00:14:52.176 "data_size": 65536 00:14:52.176 }, 00:14:52.176 { 00:14:52.176 "name": "BaseBdev2", 00:14:52.176 "uuid": "be89adda-c924-4045-8390-b1e3fcea3d2b", 00:14:52.176 "is_configured": true, 00:14:52.176 "data_offset": 0, 00:14:52.176 "data_size": 65536 00:14:52.176 }, 00:14:52.176 { 00:14:52.176 "name": "BaseBdev3", 00:14:52.176 "uuid": "0c709990-46f7-4572-ac58-ec08b678b688", 00:14:52.176 "is_configured": true, 00:14:52.176 "data_offset": 0, 00:14:52.176 "data_size": 65536 00:14:52.176 } 00:14:52.176 ] 00:14:52.176 } 00:14:52.176 } 00:14:52.176 }' 00:14:52.176 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:52.176 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:52.176 BaseBdev2 00:14:52.176 BaseBdev3' 00:14:52.176 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.176 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:52.176 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.176 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:52.176 21:46:11 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.176 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.176 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.177 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.440 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.440 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.440 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.440 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:52.440 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.440 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.440 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.441 [2024-09-29 21:46:11.275346] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.441 [2024-09-29 21:46:11.275370] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.441 [2024-09-29 21:46:11.275435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.441 [2024-09-29 21:46:11.275682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.441 [2024-09-29 21:46:11.275694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79937 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 79937 ']' 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 79937 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79937 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79937' 00:14:52.441 killing process with pid 79937 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 79937 00:14:52.441 [2024-09-29 21:46:11.327121] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.441 21:46:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 79937 00:14:52.726 [2024-09-29 21:46:11.604219] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.149 ************************************ 00:14:54.149 END TEST raid5f_state_function_test 00:14:54.149 ************************************ 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:54.149 00:14:54.149 real 0m10.697s 00:14:54.149 user 0m16.919s 00:14:54.149 sys 0m1.990s 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.149 21:46:12 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:54.149 21:46:12 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:54.149 21:46:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:54.149 21:46:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.149 ************************************ 00:14:54.149 START TEST raid5f_state_function_test_sb 00:14:54.149 ************************************ 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:54.149 21:46:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80558 00:14:54.149 Process raid pid: 80558 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80558' 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80558 00:14:54.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80558 ']' 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:54.149 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.149 [2024-09-29 21:46:12.985894] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:54.149 [2024-09-29 21:46:12.986015] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.409 [2024-09-29 21:46:13.154784] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.409 [2024-09-29 21:46:13.351256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.669 [2024-09-29 21:46:13.549459] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.669 [2024-09-29 21:46:13.549493] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.929 [2024-09-29 21:46:13.773213] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.929 [2024-09-29 21:46:13.773268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.929 [2024-09-29 21:46:13.773278] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.929 [2024-09-29 21:46:13.773287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.929 [2024-09-29 21:46:13.773293] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:54.929 [2024-09-29 21:46:13.773301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.929 21:46:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.929 "name": "Existed_Raid", 00:14:54.929 "uuid": "e824ef5e-57bd-4039-b953-94c272039946", 00:14:54.929 "strip_size_kb": 64, 00:14:54.929 "state": "configuring", 00:14:54.929 "raid_level": "raid5f", 00:14:54.929 "superblock": true, 00:14:54.929 "num_base_bdevs": 3, 00:14:54.929 "num_base_bdevs_discovered": 0, 00:14:54.929 "num_base_bdevs_operational": 3, 00:14:54.929 "base_bdevs_list": [ 00:14:54.929 { 00:14:54.929 "name": "BaseBdev1", 00:14:54.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.929 "is_configured": false, 00:14:54.929 "data_offset": 0, 00:14:54.929 "data_size": 0 00:14:54.929 }, 00:14:54.929 { 00:14:54.929 "name": "BaseBdev2", 00:14:54.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.929 "is_configured": false, 00:14:54.929 "data_offset": 0, 00:14:54.929 "data_size": 0 00:14:54.929 }, 00:14:54.929 { 00:14:54.929 "name": "BaseBdev3", 00:14:54.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.929 "is_configured": false, 00:14:54.929 "data_offset": 0, 00:14:54.929 "data_size": 0 00:14:54.929 } 00:14:54.929 ] 00:14:54.929 }' 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.929 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.499 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:55.499 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.499 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.499 [2024-09-29 21:46:14.208382] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:55.499 
[2024-09-29 21:46:14.208477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:55.499 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.499 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:55.499 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.499 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.499 [2024-09-29 21:46:14.216402] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.499 [2024-09-29 21:46:14.216484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.499 [2024-09-29 21:46:14.216509] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.499 [2024-09-29 21:46:14.216530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.499 [2024-09-29 21:46:14.216546] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:55.499 [2024-09-29 21:46:14.216564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:55.499 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.500 [2024-09-29 21:46:14.290339] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.500 BaseBdev1 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.500 [ 00:14:55.500 { 00:14:55.500 "name": "BaseBdev1", 00:14:55.500 "aliases": [ 00:14:55.500 "feabeb6b-28d5-4f6c-904c-d34150c41ba8" 00:14:55.500 ], 00:14:55.500 "product_name": "Malloc disk", 00:14:55.500 "block_size": 512, 00:14:55.500 
"num_blocks": 65536, 00:14:55.500 "uuid": "feabeb6b-28d5-4f6c-904c-d34150c41ba8", 00:14:55.500 "assigned_rate_limits": { 00:14:55.500 "rw_ios_per_sec": 0, 00:14:55.500 "rw_mbytes_per_sec": 0, 00:14:55.500 "r_mbytes_per_sec": 0, 00:14:55.500 "w_mbytes_per_sec": 0 00:14:55.500 }, 00:14:55.500 "claimed": true, 00:14:55.500 "claim_type": "exclusive_write", 00:14:55.500 "zoned": false, 00:14:55.500 "supported_io_types": { 00:14:55.500 "read": true, 00:14:55.500 "write": true, 00:14:55.500 "unmap": true, 00:14:55.500 "flush": true, 00:14:55.500 "reset": true, 00:14:55.500 "nvme_admin": false, 00:14:55.500 "nvme_io": false, 00:14:55.500 "nvme_io_md": false, 00:14:55.500 "write_zeroes": true, 00:14:55.500 "zcopy": true, 00:14:55.500 "get_zone_info": false, 00:14:55.500 "zone_management": false, 00:14:55.500 "zone_append": false, 00:14:55.500 "compare": false, 00:14:55.500 "compare_and_write": false, 00:14:55.500 "abort": true, 00:14:55.500 "seek_hole": false, 00:14:55.500 "seek_data": false, 00:14:55.500 "copy": true, 00:14:55.500 "nvme_iov_md": false 00:14:55.500 }, 00:14:55.500 "memory_domains": [ 00:14:55.500 { 00:14:55.500 "dma_device_id": "system", 00:14:55.500 "dma_device_type": 1 00:14:55.500 }, 00:14:55.500 { 00:14:55.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.500 "dma_device_type": 2 00:14:55.500 } 00:14:55.500 ], 00:14:55.500 "driver_specific": {} 00:14:55.500 } 00:14:55.500 ] 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.500 "name": "Existed_Raid", 00:14:55.500 "uuid": "6edcab0e-5d57-447a-9718-69a72d93a127", 00:14:55.500 "strip_size_kb": 64, 00:14:55.500 "state": "configuring", 00:14:55.500 "raid_level": "raid5f", 00:14:55.500 "superblock": true, 00:14:55.500 "num_base_bdevs": 3, 00:14:55.500 "num_base_bdevs_discovered": 1, 00:14:55.500 "num_base_bdevs_operational": 3, 00:14:55.500 "base_bdevs_list": [ 00:14:55.500 { 00:14:55.500 
"name": "BaseBdev1", 00:14:55.500 "uuid": "feabeb6b-28d5-4f6c-904c-d34150c41ba8", 00:14:55.500 "is_configured": true, 00:14:55.500 "data_offset": 2048, 00:14:55.500 "data_size": 63488 00:14:55.500 }, 00:14:55.500 { 00:14:55.500 "name": "BaseBdev2", 00:14:55.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.500 "is_configured": false, 00:14:55.500 "data_offset": 0, 00:14:55.500 "data_size": 0 00:14:55.500 }, 00:14:55.500 { 00:14:55.500 "name": "BaseBdev3", 00:14:55.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.500 "is_configured": false, 00:14:55.500 "data_offset": 0, 00:14:55.500 "data_size": 0 00:14:55.500 } 00:14:55.500 ] 00:14:55.500 }' 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.500 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.068 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:56.068 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.069 [2024-09-29 21:46:14.773540] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.069 [2024-09-29 21:46:14.773628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:56.069 [2024-09-29 21:46:14.781578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.069 [2024-09-29 21:46:14.783256] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.069 [2024-09-29 21:46:14.783318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.069 [2024-09-29 21:46:14.783357] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:56.069 [2024-09-29 21:46:14.783379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.069 "name": "Existed_Raid", 00:14:56.069 "uuid": "76bc5071-8630-4e40-9cf3-399e877956d2", 00:14:56.069 "strip_size_kb": 64, 00:14:56.069 "state": "configuring", 00:14:56.069 "raid_level": "raid5f", 00:14:56.069 "superblock": true, 00:14:56.069 "num_base_bdevs": 3, 00:14:56.069 "num_base_bdevs_discovered": 1, 00:14:56.069 "num_base_bdevs_operational": 3, 00:14:56.069 "base_bdevs_list": [ 00:14:56.069 { 00:14:56.069 "name": "BaseBdev1", 00:14:56.069 "uuid": "feabeb6b-28d5-4f6c-904c-d34150c41ba8", 00:14:56.069 "is_configured": true, 00:14:56.069 "data_offset": 2048, 00:14:56.069 "data_size": 63488 00:14:56.069 }, 00:14:56.069 { 00:14:56.069 "name": "BaseBdev2", 00:14:56.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.069 "is_configured": false, 00:14:56.069 "data_offset": 0, 00:14:56.069 "data_size": 0 00:14:56.069 }, 00:14:56.069 { 00:14:56.069 "name": "BaseBdev3", 00:14:56.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.069 "is_configured": false, 00:14:56.069 "data_offset": 0, 00:14:56.069 "data_size": 
0 00:14:56.069 } 00:14:56.069 ] 00:14:56.069 }' 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.069 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.328 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:56.328 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.328 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.588 [2024-09-29 21:46:15.311908] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.588 BaseBdev2 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.588 [ 00:14:56.588 { 00:14:56.588 "name": "BaseBdev2", 00:14:56.588 "aliases": [ 00:14:56.588 "d56f1394-901e-47aa-83a1-cbf7caa86ffa" 00:14:56.588 ], 00:14:56.588 "product_name": "Malloc disk", 00:14:56.588 "block_size": 512, 00:14:56.588 "num_blocks": 65536, 00:14:56.588 "uuid": "d56f1394-901e-47aa-83a1-cbf7caa86ffa", 00:14:56.588 "assigned_rate_limits": { 00:14:56.588 "rw_ios_per_sec": 0, 00:14:56.588 "rw_mbytes_per_sec": 0, 00:14:56.588 "r_mbytes_per_sec": 0, 00:14:56.588 "w_mbytes_per_sec": 0 00:14:56.588 }, 00:14:56.588 "claimed": true, 00:14:56.588 "claim_type": "exclusive_write", 00:14:56.588 "zoned": false, 00:14:56.588 "supported_io_types": { 00:14:56.588 "read": true, 00:14:56.588 "write": true, 00:14:56.588 "unmap": true, 00:14:56.588 "flush": true, 00:14:56.588 "reset": true, 00:14:56.588 "nvme_admin": false, 00:14:56.588 "nvme_io": false, 00:14:56.588 "nvme_io_md": false, 00:14:56.588 "write_zeroes": true, 00:14:56.588 "zcopy": true, 00:14:56.588 "get_zone_info": false, 00:14:56.588 "zone_management": false, 00:14:56.588 "zone_append": false, 00:14:56.588 "compare": false, 00:14:56.588 "compare_and_write": false, 00:14:56.588 "abort": true, 00:14:56.588 "seek_hole": false, 00:14:56.588 "seek_data": false, 00:14:56.588 "copy": true, 00:14:56.588 "nvme_iov_md": false 00:14:56.588 }, 00:14:56.588 "memory_domains": [ 00:14:56.588 { 00:14:56.588 "dma_device_id": "system", 00:14:56.588 "dma_device_type": 1 00:14:56.588 }, 00:14:56.588 { 00:14:56.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.588 "dma_device_type": 2 00:14:56.588 } 
00:14:56.588 ], 00:14:56.588 "driver_specific": {} 00:14:56.588 } 00:14:56.588 ] 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.588 "name": "Existed_Raid", 00:14:56.588 "uuid": "76bc5071-8630-4e40-9cf3-399e877956d2", 00:14:56.588 "strip_size_kb": 64, 00:14:56.588 "state": "configuring", 00:14:56.588 "raid_level": "raid5f", 00:14:56.588 "superblock": true, 00:14:56.588 "num_base_bdevs": 3, 00:14:56.588 "num_base_bdevs_discovered": 2, 00:14:56.588 "num_base_bdevs_operational": 3, 00:14:56.588 "base_bdevs_list": [ 00:14:56.588 { 00:14:56.588 "name": "BaseBdev1", 00:14:56.588 "uuid": "feabeb6b-28d5-4f6c-904c-d34150c41ba8", 00:14:56.588 "is_configured": true, 00:14:56.588 "data_offset": 2048, 00:14:56.588 "data_size": 63488 00:14:56.588 }, 00:14:56.588 { 00:14:56.588 "name": "BaseBdev2", 00:14:56.588 "uuid": "d56f1394-901e-47aa-83a1-cbf7caa86ffa", 00:14:56.588 "is_configured": true, 00:14:56.588 "data_offset": 2048, 00:14:56.588 "data_size": 63488 00:14:56.588 }, 00:14:56.588 { 00:14:56.588 "name": "BaseBdev3", 00:14:56.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.588 "is_configured": false, 00:14:56.588 "data_offset": 0, 00:14:56.588 "data_size": 0 00:14:56.588 } 00:14:56.588 ] 00:14:56.588 }' 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.588 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.848 [2024-09-29 21:46:15.804889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.848 [2024-09-29 21:46:15.805264] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:56.848 [2024-09-29 21:46:15.805329] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:56.848 [2024-09-29 21:46:15.805612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:56.848 BaseBdev3 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.848 [2024-09-29 21:46:15.810971] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:56.848 [2024-09-29 21:46:15.811052] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:56.848 [2024-09-29 21:46:15.811239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.848 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.107 [ 00:14:57.107 { 00:14:57.107 "name": "BaseBdev3", 00:14:57.107 "aliases": [ 00:14:57.107 "c806ae5e-20c9-424a-8af3-071400270c8e" 00:14:57.107 ], 00:14:57.107 "product_name": "Malloc disk", 00:14:57.107 "block_size": 512, 00:14:57.107 "num_blocks": 65536, 00:14:57.107 "uuid": "c806ae5e-20c9-424a-8af3-071400270c8e", 00:14:57.107 "assigned_rate_limits": { 00:14:57.107 "rw_ios_per_sec": 0, 00:14:57.107 "rw_mbytes_per_sec": 0, 00:14:57.107 "r_mbytes_per_sec": 0, 00:14:57.107 "w_mbytes_per_sec": 0 00:14:57.107 }, 00:14:57.107 "claimed": true, 00:14:57.107 "claim_type": "exclusive_write", 00:14:57.107 "zoned": false, 00:14:57.107 "supported_io_types": { 00:14:57.107 "read": true, 00:14:57.107 "write": true, 00:14:57.107 "unmap": true, 00:14:57.107 "flush": true, 00:14:57.107 "reset": true, 00:14:57.107 "nvme_admin": false, 00:14:57.107 "nvme_io": false, 00:14:57.107 "nvme_io_md": false, 00:14:57.107 "write_zeroes": true, 00:14:57.107 "zcopy": true, 00:14:57.107 "get_zone_info": false, 00:14:57.107 "zone_management": false, 00:14:57.107 "zone_append": false, 00:14:57.107 "compare": false, 00:14:57.107 "compare_and_write": false, 00:14:57.107 "abort": true, 00:14:57.107 "seek_hole": false, 00:14:57.107 "seek_data": false, 00:14:57.107 "copy": true, 00:14:57.107 
"nvme_iov_md": false 00:14:57.107 }, 00:14:57.107 "memory_domains": [ 00:14:57.107 { 00:14:57.107 "dma_device_id": "system", 00:14:57.107 "dma_device_type": 1 00:14:57.107 }, 00:14:57.107 { 00:14:57.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.107 "dma_device_type": 2 00:14:57.107 } 00:14:57.107 ], 00:14:57.107 "driver_specific": {} 00:14:57.107 } 00:14:57.107 ] 00:14:57.107 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.107 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:57.107 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:57.107 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.107 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:57.107 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.108 "name": "Existed_Raid", 00:14:57.108 "uuid": "76bc5071-8630-4e40-9cf3-399e877956d2", 00:14:57.108 "strip_size_kb": 64, 00:14:57.108 "state": "online", 00:14:57.108 "raid_level": "raid5f", 00:14:57.108 "superblock": true, 00:14:57.108 "num_base_bdevs": 3, 00:14:57.108 "num_base_bdevs_discovered": 3, 00:14:57.108 "num_base_bdevs_operational": 3, 00:14:57.108 "base_bdevs_list": [ 00:14:57.108 { 00:14:57.108 "name": "BaseBdev1", 00:14:57.108 "uuid": "feabeb6b-28d5-4f6c-904c-d34150c41ba8", 00:14:57.108 "is_configured": true, 00:14:57.108 "data_offset": 2048, 00:14:57.108 "data_size": 63488 00:14:57.108 }, 00:14:57.108 { 00:14:57.108 "name": "BaseBdev2", 00:14:57.108 "uuid": "d56f1394-901e-47aa-83a1-cbf7caa86ffa", 00:14:57.108 "is_configured": true, 00:14:57.108 "data_offset": 2048, 00:14:57.108 "data_size": 63488 00:14:57.108 }, 00:14:57.108 { 00:14:57.108 "name": "BaseBdev3", 00:14:57.108 "uuid": "c806ae5e-20c9-424a-8af3-071400270c8e", 00:14:57.108 "is_configured": true, 00:14:57.108 "data_offset": 2048, 00:14:57.108 "data_size": 63488 00:14:57.108 } 00:14:57.108 ] 00:14:57.108 }' 00:14:57.108 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.108 21:46:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.367 [2024-09-29 21:46:16.252368] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:57.367 "name": "Existed_Raid", 00:14:57.367 "aliases": [ 00:14:57.367 "76bc5071-8630-4e40-9cf3-399e877956d2" 00:14:57.367 ], 00:14:57.367 "product_name": "Raid Volume", 00:14:57.367 "block_size": 512, 00:14:57.367 "num_blocks": 126976, 00:14:57.367 "uuid": "76bc5071-8630-4e40-9cf3-399e877956d2", 00:14:57.367 "assigned_rate_limits": { 00:14:57.367 "rw_ios_per_sec": 0, 00:14:57.367 
"rw_mbytes_per_sec": 0, 00:14:57.367 "r_mbytes_per_sec": 0, 00:14:57.367 "w_mbytes_per_sec": 0 00:14:57.367 }, 00:14:57.367 "claimed": false, 00:14:57.367 "zoned": false, 00:14:57.367 "supported_io_types": { 00:14:57.367 "read": true, 00:14:57.367 "write": true, 00:14:57.367 "unmap": false, 00:14:57.367 "flush": false, 00:14:57.367 "reset": true, 00:14:57.367 "nvme_admin": false, 00:14:57.367 "nvme_io": false, 00:14:57.367 "nvme_io_md": false, 00:14:57.367 "write_zeroes": true, 00:14:57.367 "zcopy": false, 00:14:57.367 "get_zone_info": false, 00:14:57.367 "zone_management": false, 00:14:57.367 "zone_append": false, 00:14:57.367 "compare": false, 00:14:57.367 "compare_and_write": false, 00:14:57.367 "abort": false, 00:14:57.367 "seek_hole": false, 00:14:57.367 "seek_data": false, 00:14:57.367 "copy": false, 00:14:57.367 "nvme_iov_md": false 00:14:57.367 }, 00:14:57.367 "driver_specific": { 00:14:57.367 "raid": { 00:14:57.367 "uuid": "76bc5071-8630-4e40-9cf3-399e877956d2", 00:14:57.367 "strip_size_kb": 64, 00:14:57.367 "state": "online", 00:14:57.367 "raid_level": "raid5f", 00:14:57.367 "superblock": true, 00:14:57.367 "num_base_bdevs": 3, 00:14:57.367 "num_base_bdevs_discovered": 3, 00:14:57.367 "num_base_bdevs_operational": 3, 00:14:57.367 "base_bdevs_list": [ 00:14:57.367 { 00:14:57.367 "name": "BaseBdev1", 00:14:57.367 "uuid": "feabeb6b-28d5-4f6c-904c-d34150c41ba8", 00:14:57.367 "is_configured": true, 00:14:57.367 "data_offset": 2048, 00:14:57.367 "data_size": 63488 00:14:57.367 }, 00:14:57.367 { 00:14:57.367 "name": "BaseBdev2", 00:14:57.367 "uuid": "d56f1394-901e-47aa-83a1-cbf7caa86ffa", 00:14:57.367 "is_configured": true, 00:14:57.367 "data_offset": 2048, 00:14:57.367 "data_size": 63488 00:14:57.367 }, 00:14:57.367 { 00:14:57.367 "name": "BaseBdev3", 00:14:57.367 "uuid": "c806ae5e-20c9-424a-8af3-071400270c8e", 00:14:57.367 "is_configured": true, 00:14:57.367 "data_offset": 2048, 00:14:57.367 "data_size": 63488 00:14:57.367 } 00:14:57.367 ] 00:14:57.367 } 
00:14:57.367 } 00:14:57.367 }' 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:57.367 BaseBdev2 00:14:57.367 BaseBdev3' 00:14:57.367 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.626 [2024-09-29 
21:46:16.515760] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.626 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.886 21:46:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.886 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.886 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.886 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.886 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.886 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.886 "name": "Existed_Raid", 00:14:57.886 "uuid": "76bc5071-8630-4e40-9cf3-399e877956d2", 00:14:57.886 "strip_size_kb": 64, 00:14:57.886 "state": "online", 00:14:57.886 "raid_level": "raid5f", 00:14:57.886 "superblock": true, 00:14:57.886 "num_base_bdevs": 3, 00:14:57.886 "num_base_bdevs_discovered": 2, 00:14:57.886 "num_base_bdevs_operational": 2, 00:14:57.886 "base_bdevs_list": [ 00:14:57.886 { 00:14:57.886 "name": null, 00:14:57.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.886 "is_configured": false, 00:14:57.886 "data_offset": 0, 00:14:57.886 "data_size": 63488 00:14:57.886 }, 00:14:57.886 { 00:14:57.886 "name": "BaseBdev2", 00:14:57.886 "uuid": "d56f1394-901e-47aa-83a1-cbf7caa86ffa", 00:14:57.886 "is_configured": true, 00:14:57.886 "data_offset": 2048, 00:14:57.886 "data_size": 63488 00:14:57.886 }, 00:14:57.886 { 00:14:57.886 "name": "BaseBdev3", 00:14:57.886 "uuid": "c806ae5e-20c9-424a-8af3-071400270c8e", 00:14:57.886 "is_configured": true, 00:14:57.886 "data_offset": 2048, 00:14:57.886 "data_size": 63488 00:14:57.886 } 00:14:57.886 ] 00:14:57.886 }' 00:14:57.886 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.886 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:58.146 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:58.146 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.146 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.146 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.146 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.146 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:58.146 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.146 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:58.146 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:58.146 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:58.146 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.146 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.146 [2024-09-29 21:46:17.110727] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:58.146 [2024-09-29 21:46:17.110944] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.405 [2024-09-29 21:46:17.200073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:58.405 21:46:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.405 [2024-09-29 21:46:17.255940] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:58.405 [2024-09-29 21:46:17.256074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.405 
21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.405 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.666 BaseBdev2 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:58.666 21:46:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.666 [ 00:14:58.666 { 00:14:58.666 "name": "BaseBdev2", 00:14:58.666 "aliases": [ 00:14:58.666 "026cf5f2-683d-4d16-b9b5-39ff0c92cf66" 00:14:58.666 ], 00:14:58.666 "product_name": "Malloc disk", 00:14:58.666 "block_size": 512, 00:14:58.666 "num_blocks": 65536, 00:14:58.666 "uuid": "026cf5f2-683d-4d16-b9b5-39ff0c92cf66", 00:14:58.666 "assigned_rate_limits": { 00:14:58.666 "rw_ios_per_sec": 0, 00:14:58.666 "rw_mbytes_per_sec": 0, 00:14:58.666 "r_mbytes_per_sec": 0, 00:14:58.666 "w_mbytes_per_sec": 0 00:14:58.666 }, 00:14:58.666 "claimed": false, 00:14:58.666 "zoned": false, 00:14:58.666 "supported_io_types": { 00:14:58.666 "read": true, 00:14:58.666 "write": true, 00:14:58.666 "unmap": true, 00:14:58.666 "flush": true, 00:14:58.666 "reset": true, 00:14:58.666 "nvme_admin": false, 00:14:58.666 "nvme_io": false, 00:14:58.666 "nvme_io_md": false, 00:14:58.666 "write_zeroes": true, 00:14:58.666 "zcopy": true, 00:14:58.666 "get_zone_info": false, 
00:14:58.666 "zone_management": false, 00:14:58.666 "zone_append": false, 00:14:58.666 "compare": false, 00:14:58.666 "compare_and_write": false, 00:14:58.666 "abort": true, 00:14:58.666 "seek_hole": false, 00:14:58.666 "seek_data": false, 00:14:58.666 "copy": true, 00:14:58.666 "nvme_iov_md": false 00:14:58.666 }, 00:14:58.666 "memory_domains": [ 00:14:58.666 { 00:14:58.666 "dma_device_id": "system", 00:14:58.666 "dma_device_type": 1 00:14:58.666 }, 00:14:58.666 { 00:14:58.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.666 "dma_device_type": 2 00:14:58.666 } 00:14:58.666 ], 00:14:58.666 "driver_specific": {} 00:14:58.666 } 00:14:58.666 ] 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.666 BaseBdev3 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:58.666 21:46:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.666 [ 00:14:58.666 { 00:14:58.666 "name": "BaseBdev3", 00:14:58.666 "aliases": [ 00:14:58.666 "f7540f1a-6342-42a9-9a6e-d2b1b31fdb00" 00:14:58.666 ], 00:14:58.666 "product_name": "Malloc disk", 00:14:58.666 "block_size": 512, 00:14:58.666 "num_blocks": 65536, 00:14:58.666 "uuid": "f7540f1a-6342-42a9-9a6e-d2b1b31fdb00", 00:14:58.666 "assigned_rate_limits": { 00:14:58.666 "rw_ios_per_sec": 0, 00:14:58.666 "rw_mbytes_per_sec": 0, 00:14:58.666 "r_mbytes_per_sec": 0, 00:14:58.666 "w_mbytes_per_sec": 0 00:14:58.666 }, 00:14:58.666 "claimed": false, 00:14:58.666 "zoned": false, 00:14:58.666 "supported_io_types": { 00:14:58.666 "read": true, 00:14:58.666 "write": true, 00:14:58.666 "unmap": true, 00:14:58.666 "flush": true, 00:14:58.666 "reset": true, 00:14:58.666 "nvme_admin": false, 00:14:58.666 "nvme_io": false, 00:14:58.666 "nvme_io_md": 
false, 00:14:58.666 "write_zeroes": true, 00:14:58.666 "zcopy": true, 00:14:58.666 "get_zone_info": false, 00:14:58.666 "zone_management": false, 00:14:58.666 "zone_append": false, 00:14:58.666 "compare": false, 00:14:58.666 "compare_and_write": false, 00:14:58.666 "abort": true, 00:14:58.666 "seek_hole": false, 00:14:58.666 "seek_data": false, 00:14:58.666 "copy": true, 00:14:58.666 "nvme_iov_md": false 00:14:58.666 }, 00:14:58.666 "memory_domains": [ 00:14:58.666 { 00:14:58.666 "dma_device_id": "system", 00:14:58.666 "dma_device_type": 1 00:14:58.666 }, 00:14:58.666 { 00:14:58.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.666 "dma_device_type": 2 00:14:58.666 } 00:14:58.666 ], 00:14:58.666 "driver_specific": {} 00:14:58.666 } 00:14:58.666 ] 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.666 [2024-09-29 21:46:17.563888] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:58.666 [2024-09-29 21:46:17.564015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:58.666 [2024-09-29 21:46:17.564067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:14:58.666 [2024-09-29 21:46:17.565692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.666 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.667 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.667 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.667 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.667 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.667 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.667 21:46:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.667 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.667 "name": "Existed_Raid", 00:14:58.667 "uuid": "0fc41224-f20d-4ec7-9fee-857a59833f7e", 00:14:58.667 "strip_size_kb": 64, 00:14:58.667 "state": "configuring", 00:14:58.667 "raid_level": "raid5f", 00:14:58.667 "superblock": true, 00:14:58.667 "num_base_bdevs": 3, 00:14:58.667 "num_base_bdevs_discovered": 2, 00:14:58.667 "num_base_bdevs_operational": 3, 00:14:58.667 "base_bdevs_list": [ 00:14:58.667 { 00:14:58.667 "name": "BaseBdev1", 00:14:58.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.667 "is_configured": false, 00:14:58.667 "data_offset": 0, 00:14:58.667 "data_size": 0 00:14:58.667 }, 00:14:58.667 { 00:14:58.667 "name": "BaseBdev2", 00:14:58.667 "uuid": "026cf5f2-683d-4d16-b9b5-39ff0c92cf66", 00:14:58.667 "is_configured": true, 00:14:58.667 "data_offset": 2048, 00:14:58.667 "data_size": 63488 00:14:58.667 }, 00:14:58.667 { 00:14:58.667 "name": "BaseBdev3", 00:14:58.667 "uuid": "f7540f1a-6342-42a9-9a6e-d2b1b31fdb00", 00:14:58.667 "is_configured": true, 00:14:58.667 "data_offset": 2048, 00:14:58.667 "data_size": 63488 00:14:58.667 } 00:14:58.667 ] 00:14:58.667 }' 00:14:58.667 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.667 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.237 [2024-09-29 21:46:18.015083] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:59.237 
21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.237 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:59.237 "name": "Existed_Raid", 00:14:59.237 "uuid": "0fc41224-f20d-4ec7-9fee-857a59833f7e", 00:14:59.237 "strip_size_kb": 64, 00:14:59.237 "state": "configuring", 00:14:59.237 "raid_level": "raid5f", 00:14:59.237 "superblock": true, 00:14:59.237 "num_base_bdevs": 3, 00:14:59.237 "num_base_bdevs_discovered": 1, 00:14:59.237 "num_base_bdevs_operational": 3, 00:14:59.237 "base_bdevs_list": [ 00:14:59.237 { 00:14:59.237 "name": "BaseBdev1", 00:14:59.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.237 "is_configured": false, 00:14:59.237 "data_offset": 0, 00:14:59.237 "data_size": 0 00:14:59.237 }, 00:14:59.237 { 00:14:59.237 "name": null, 00:14:59.237 "uuid": "026cf5f2-683d-4d16-b9b5-39ff0c92cf66", 00:14:59.237 "is_configured": false, 00:14:59.237 "data_offset": 0, 00:14:59.237 "data_size": 63488 00:14:59.237 }, 00:14:59.237 { 00:14:59.237 "name": "BaseBdev3", 00:14:59.237 "uuid": "f7540f1a-6342-42a9-9a6e-d2b1b31fdb00", 00:14:59.237 "is_configured": true, 00:14:59.238 "data_offset": 2048, 00:14:59.238 "data_size": 63488 00:14:59.238 } 00:14:59.238 ] 00:14:59.238 }' 00:14:59.238 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.238 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.808 [2024-09-29 21:46:18.577115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.808 BaseBdev1 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:59.808 
21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.808 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.808 [ 00:14:59.808 { 00:14:59.808 "name": "BaseBdev1", 00:14:59.808 "aliases": [ 00:14:59.808 "7afc8115-0735-4060-9fc8-03c4f1779518" 00:14:59.808 ], 00:14:59.808 "product_name": "Malloc disk", 00:14:59.808 "block_size": 512, 00:14:59.808 "num_blocks": 65536, 00:14:59.808 "uuid": "7afc8115-0735-4060-9fc8-03c4f1779518", 00:14:59.808 "assigned_rate_limits": { 00:14:59.808 "rw_ios_per_sec": 0, 00:14:59.808 "rw_mbytes_per_sec": 0, 00:14:59.808 "r_mbytes_per_sec": 0, 00:14:59.808 "w_mbytes_per_sec": 0 00:14:59.808 }, 00:14:59.808 "claimed": true, 00:14:59.808 "claim_type": "exclusive_write", 00:14:59.808 "zoned": false, 00:14:59.808 "supported_io_types": { 00:14:59.808 "read": true, 00:14:59.808 "write": true, 00:14:59.808 "unmap": true, 00:14:59.808 "flush": true, 00:14:59.808 "reset": true, 00:14:59.808 "nvme_admin": false, 00:14:59.808 "nvme_io": false, 00:14:59.808 "nvme_io_md": false, 00:14:59.808 "write_zeroes": true, 00:14:59.808 "zcopy": true, 00:14:59.808 "get_zone_info": false, 00:14:59.808 "zone_management": false, 00:14:59.808 "zone_append": false, 00:14:59.808 "compare": false, 00:14:59.808 "compare_and_write": false, 00:14:59.808 "abort": true, 00:14:59.808 "seek_hole": false, 00:14:59.808 "seek_data": false, 00:14:59.808 "copy": true, 00:14:59.808 "nvme_iov_md": false 00:14:59.808 }, 00:14:59.808 "memory_domains": [ 00:14:59.808 { 00:14:59.808 "dma_device_id": "system", 00:14:59.809 "dma_device_type": 1 00:14:59.809 }, 00:14:59.809 { 00:14:59.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.809 "dma_device_type": 2 00:14:59.809 } 00:14:59.809 ], 00:14:59.809 "driver_specific": {} 00:14:59.809 } 00:14:59.809 ] 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.809 
21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:59.809 "name": "Existed_Raid", 00:14:59.809 "uuid": "0fc41224-f20d-4ec7-9fee-857a59833f7e", 00:14:59.809 "strip_size_kb": 64, 00:14:59.809 "state": "configuring", 00:14:59.809 "raid_level": "raid5f", 00:14:59.809 "superblock": true, 00:14:59.809 "num_base_bdevs": 3, 00:14:59.809 "num_base_bdevs_discovered": 2, 00:14:59.809 "num_base_bdevs_operational": 3, 00:14:59.809 "base_bdevs_list": [ 00:14:59.809 { 00:14:59.809 "name": "BaseBdev1", 00:14:59.809 "uuid": "7afc8115-0735-4060-9fc8-03c4f1779518", 00:14:59.809 "is_configured": true, 00:14:59.809 "data_offset": 2048, 00:14:59.809 "data_size": 63488 00:14:59.809 }, 00:14:59.809 { 00:14:59.809 "name": null, 00:14:59.809 "uuid": "026cf5f2-683d-4d16-b9b5-39ff0c92cf66", 00:14:59.809 "is_configured": false, 00:14:59.809 "data_offset": 0, 00:14:59.809 "data_size": 63488 00:14:59.809 }, 00:14:59.809 { 00:14:59.809 "name": "BaseBdev3", 00:14:59.809 "uuid": "f7540f1a-6342-42a9-9a6e-d2b1b31fdb00", 00:14:59.809 "is_configured": true, 00:14:59.809 "data_offset": 2048, 00:14:59.809 "data_size": 63488 00:14:59.809 } 00:14:59.809 ] 00:14:59.809 }' 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.809 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.379 [2024-09-29 21:46:19.108303] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.379 21:46:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.379 "name": "Existed_Raid", 00:15:00.379 "uuid": "0fc41224-f20d-4ec7-9fee-857a59833f7e", 00:15:00.379 "strip_size_kb": 64, 00:15:00.379 "state": "configuring", 00:15:00.379 "raid_level": "raid5f", 00:15:00.379 "superblock": true, 00:15:00.379 "num_base_bdevs": 3, 00:15:00.379 "num_base_bdevs_discovered": 1, 00:15:00.379 "num_base_bdevs_operational": 3, 00:15:00.379 "base_bdevs_list": [ 00:15:00.379 { 00:15:00.379 "name": "BaseBdev1", 00:15:00.379 "uuid": "7afc8115-0735-4060-9fc8-03c4f1779518", 00:15:00.379 "is_configured": true, 00:15:00.379 "data_offset": 2048, 00:15:00.379 "data_size": 63488 00:15:00.379 }, 00:15:00.379 { 00:15:00.379 "name": null, 00:15:00.379 "uuid": "026cf5f2-683d-4d16-b9b5-39ff0c92cf66", 00:15:00.379 "is_configured": false, 00:15:00.379 "data_offset": 0, 00:15:00.379 "data_size": 63488 00:15:00.379 }, 00:15:00.379 { 00:15:00.379 "name": null, 00:15:00.379 "uuid": "f7540f1a-6342-42a9-9a6e-d2b1b31fdb00", 00:15:00.379 "is_configured": false, 00:15:00.379 "data_offset": 0, 00:15:00.379 "data_size": 63488 00:15:00.379 } 00:15:00.379 ] 00:15:00.379 }' 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.379 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.640 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:00.640 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.640 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:00.640 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.640 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.640 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:00.640 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:00.640 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.640 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.640 [2024-09-29 21:46:19.619515] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.900 21:46:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.900 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.901 "name": "Existed_Raid", 00:15:00.901 "uuid": "0fc41224-f20d-4ec7-9fee-857a59833f7e", 00:15:00.901 "strip_size_kb": 64, 00:15:00.901 "state": "configuring", 00:15:00.901 "raid_level": "raid5f", 00:15:00.901 "superblock": true, 00:15:00.901 "num_base_bdevs": 3, 00:15:00.901 "num_base_bdevs_discovered": 2, 00:15:00.901 "num_base_bdevs_operational": 3, 00:15:00.901 "base_bdevs_list": [ 00:15:00.901 { 00:15:00.901 "name": "BaseBdev1", 00:15:00.901 "uuid": "7afc8115-0735-4060-9fc8-03c4f1779518", 00:15:00.901 "is_configured": true, 00:15:00.901 "data_offset": 2048, 00:15:00.901 "data_size": 63488 00:15:00.901 }, 00:15:00.901 { 00:15:00.901 "name": null, 00:15:00.901 "uuid": "026cf5f2-683d-4d16-b9b5-39ff0c92cf66", 00:15:00.901 "is_configured": false, 00:15:00.901 "data_offset": 0, 00:15:00.901 "data_size": 63488 00:15:00.901 }, 00:15:00.901 { 
00:15:00.901 "name": "BaseBdev3", 00:15:00.901 "uuid": "f7540f1a-6342-42a9-9a6e-d2b1b31fdb00", 00:15:00.901 "is_configured": true, 00:15:00.901 "data_offset": 2048, 00:15:00.901 "data_size": 63488 00:15:00.901 } 00:15:00.901 ] 00:15:00.901 }' 00:15:00.901 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.901 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.160 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:01.160 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.160 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.160 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.160 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.160 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:01.160 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:01.160 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.160 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.160 [2024-09-29 21:46:20.074774] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.420 "name": "Existed_Raid", 00:15:01.420 "uuid": "0fc41224-f20d-4ec7-9fee-857a59833f7e", 00:15:01.420 "strip_size_kb": 64, 00:15:01.420 "state": "configuring", 00:15:01.420 "raid_level": "raid5f", 00:15:01.420 "superblock": true, 00:15:01.420 "num_base_bdevs": 3, 00:15:01.420 "num_base_bdevs_discovered": 1, 00:15:01.420 
"num_base_bdevs_operational": 3, 00:15:01.420 "base_bdevs_list": [ 00:15:01.420 { 00:15:01.420 "name": null, 00:15:01.420 "uuid": "7afc8115-0735-4060-9fc8-03c4f1779518", 00:15:01.420 "is_configured": false, 00:15:01.420 "data_offset": 0, 00:15:01.420 "data_size": 63488 00:15:01.420 }, 00:15:01.420 { 00:15:01.420 "name": null, 00:15:01.420 "uuid": "026cf5f2-683d-4d16-b9b5-39ff0c92cf66", 00:15:01.420 "is_configured": false, 00:15:01.420 "data_offset": 0, 00:15:01.420 "data_size": 63488 00:15:01.420 }, 00:15:01.420 { 00:15:01.420 "name": "BaseBdev3", 00:15:01.420 "uuid": "f7540f1a-6342-42a9-9a6e-d2b1b31fdb00", 00:15:01.420 "is_configured": true, 00:15:01.420 "data_offset": 2048, 00:15:01.420 "data_size": 63488 00:15:01.420 } 00:15:01.420 ] 00:15:01.420 }' 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.420 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.681 21:46:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.681 [2024-09-29 21:46:20.596569] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.681 "name": "Existed_Raid", 00:15:01.681 "uuid": "0fc41224-f20d-4ec7-9fee-857a59833f7e", 00:15:01.681 "strip_size_kb": 64, 00:15:01.681 "state": "configuring", 00:15:01.681 "raid_level": "raid5f", 00:15:01.681 "superblock": true, 00:15:01.681 "num_base_bdevs": 3, 00:15:01.681 "num_base_bdevs_discovered": 2, 00:15:01.681 "num_base_bdevs_operational": 3, 00:15:01.681 "base_bdevs_list": [ 00:15:01.681 { 00:15:01.681 "name": null, 00:15:01.681 "uuid": "7afc8115-0735-4060-9fc8-03c4f1779518", 00:15:01.681 "is_configured": false, 00:15:01.681 "data_offset": 0, 00:15:01.681 "data_size": 63488 00:15:01.681 }, 00:15:01.681 { 00:15:01.681 "name": "BaseBdev2", 00:15:01.681 "uuid": "026cf5f2-683d-4d16-b9b5-39ff0c92cf66", 00:15:01.681 "is_configured": true, 00:15:01.681 "data_offset": 2048, 00:15:01.681 "data_size": 63488 00:15:01.681 }, 00:15:01.681 { 00:15:01.681 "name": "BaseBdev3", 00:15:01.681 "uuid": "f7540f1a-6342-42a9-9a6e-d2b1b31fdb00", 00:15:01.681 "is_configured": true, 00:15:01.681 "data_offset": 2048, 00:15:01.681 "data_size": 63488 00:15:01.681 } 00:15:01.681 ] 00:15:01.681 }' 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.681 21:46:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.251 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.251 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:02.251 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.251 21:46:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.251 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.251 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:02.251 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.251 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.251 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.251 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:02.251 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7afc8115-0735-4060-9fc8-03c4f1779518 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.252 [2024-09-29 21:46:21.132434] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:02.252 [2024-09-29 21:46:21.132724] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:02.252 [2024-09-29 21:46:21.132775] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:02.252 [2024-09-29 21:46:21.133050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:02.252 NewBaseBdev 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.252 21:46:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.252 [2024-09-29 21:46:21.138248] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:02.252 [2024-09-29 21:46:21.138314] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:02.252 [2024-09-29 21:46:21.138494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.252 [ 00:15:02.252 { 00:15:02.252 "name": "NewBaseBdev", 00:15:02.252 
"aliases": [ 00:15:02.252 "7afc8115-0735-4060-9fc8-03c4f1779518" 00:15:02.252 ], 00:15:02.252 "product_name": "Malloc disk", 00:15:02.252 "block_size": 512, 00:15:02.252 "num_blocks": 65536, 00:15:02.252 "uuid": "7afc8115-0735-4060-9fc8-03c4f1779518", 00:15:02.252 "assigned_rate_limits": { 00:15:02.252 "rw_ios_per_sec": 0, 00:15:02.252 "rw_mbytes_per_sec": 0, 00:15:02.252 "r_mbytes_per_sec": 0, 00:15:02.252 "w_mbytes_per_sec": 0 00:15:02.252 }, 00:15:02.252 "claimed": true, 00:15:02.252 "claim_type": "exclusive_write", 00:15:02.252 "zoned": false, 00:15:02.252 "supported_io_types": { 00:15:02.252 "read": true, 00:15:02.252 "write": true, 00:15:02.252 "unmap": true, 00:15:02.252 "flush": true, 00:15:02.252 "reset": true, 00:15:02.252 "nvme_admin": false, 00:15:02.252 "nvme_io": false, 00:15:02.252 "nvme_io_md": false, 00:15:02.252 "write_zeroes": true, 00:15:02.252 "zcopy": true, 00:15:02.252 "get_zone_info": false, 00:15:02.252 "zone_management": false, 00:15:02.252 "zone_append": false, 00:15:02.252 "compare": false, 00:15:02.252 "compare_and_write": false, 00:15:02.252 "abort": true, 00:15:02.252 "seek_hole": false, 00:15:02.252 "seek_data": false, 00:15:02.252 "copy": true, 00:15:02.252 "nvme_iov_md": false 00:15:02.252 }, 00:15:02.252 "memory_domains": [ 00:15:02.252 { 00:15:02.252 "dma_device_id": "system", 00:15:02.252 "dma_device_type": 1 00:15:02.252 }, 00:15:02.252 { 00:15:02.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.252 "dma_device_type": 2 00:15:02.252 } 00:15:02.252 ], 00:15:02.252 "driver_specific": {} 00:15:02.252 } 00:15:02.252 ] 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:02.252 21:46:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.252 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.512 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.512 "name": "Existed_Raid", 00:15:02.512 "uuid": "0fc41224-f20d-4ec7-9fee-857a59833f7e", 00:15:02.512 "strip_size_kb": 64, 00:15:02.512 "state": "online", 00:15:02.512 "raid_level": "raid5f", 00:15:02.512 "superblock": true, 00:15:02.512 
"num_base_bdevs": 3, 00:15:02.512 "num_base_bdevs_discovered": 3, 00:15:02.512 "num_base_bdevs_operational": 3, 00:15:02.512 "base_bdevs_list": [ 00:15:02.512 { 00:15:02.512 "name": "NewBaseBdev", 00:15:02.512 "uuid": "7afc8115-0735-4060-9fc8-03c4f1779518", 00:15:02.512 "is_configured": true, 00:15:02.512 "data_offset": 2048, 00:15:02.512 "data_size": 63488 00:15:02.512 }, 00:15:02.512 { 00:15:02.512 "name": "BaseBdev2", 00:15:02.512 "uuid": "026cf5f2-683d-4d16-b9b5-39ff0c92cf66", 00:15:02.512 "is_configured": true, 00:15:02.512 "data_offset": 2048, 00:15:02.512 "data_size": 63488 00:15:02.512 }, 00:15:02.512 { 00:15:02.512 "name": "BaseBdev3", 00:15:02.512 "uuid": "f7540f1a-6342-42a9-9a6e-d2b1b31fdb00", 00:15:02.512 "is_configured": true, 00:15:02.512 "data_offset": 2048, 00:15:02.512 "data_size": 63488 00:15:02.512 } 00:15:02.512 ] 00:15:02.512 }' 00:15:02.512 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.512 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.772 [2024-09-29 21:46:21.612089] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:02.772 "name": "Existed_Raid", 00:15:02.772 "aliases": [ 00:15:02.772 "0fc41224-f20d-4ec7-9fee-857a59833f7e" 00:15:02.772 ], 00:15:02.772 "product_name": "Raid Volume", 00:15:02.772 "block_size": 512, 00:15:02.772 "num_blocks": 126976, 00:15:02.772 "uuid": "0fc41224-f20d-4ec7-9fee-857a59833f7e", 00:15:02.772 "assigned_rate_limits": { 00:15:02.772 "rw_ios_per_sec": 0, 00:15:02.772 "rw_mbytes_per_sec": 0, 00:15:02.772 "r_mbytes_per_sec": 0, 00:15:02.772 "w_mbytes_per_sec": 0 00:15:02.772 }, 00:15:02.772 "claimed": false, 00:15:02.772 "zoned": false, 00:15:02.772 "supported_io_types": { 00:15:02.772 "read": true, 00:15:02.772 "write": true, 00:15:02.772 "unmap": false, 00:15:02.772 "flush": false, 00:15:02.772 "reset": true, 00:15:02.772 "nvme_admin": false, 00:15:02.772 "nvme_io": false, 00:15:02.772 "nvme_io_md": false, 00:15:02.772 "write_zeroes": true, 00:15:02.772 "zcopy": false, 00:15:02.772 "get_zone_info": false, 00:15:02.772 "zone_management": false, 00:15:02.772 "zone_append": false, 00:15:02.772 "compare": false, 00:15:02.772 "compare_and_write": false, 00:15:02.772 "abort": false, 00:15:02.772 "seek_hole": false, 00:15:02.772 "seek_data": false, 00:15:02.772 "copy": false, 00:15:02.772 "nvme_iov_md": false 00:15:02.772 }, 00:15:02.772 "driver_specific": { 00:15:02.772 "raid": { 00:15:02.772 "uuid": "0fc41224-f20d-4ec7-9fee-857a59833f7e", 00:15:02.772 
"strip_size_kb": 64, 00:15:02.772 "state": "online", 00:15:02.772 "raid_level": "raid5f", 00:15:02.772 "superblock": true, 00:15:02.772 "num_base_bdevs": 3, 00:15:02.772 "num_base_bdevs_discovered": 3, 00:15:02.772 "num_base_bdevs_operational": 3, 00:15:02.772 "base_bdevs_list": [ 00:15:02.772 { 00:15:02.772 "name": "NewBaseBdev", 00:15:02.772 "uuid": "7afc8115-0735-4060-9fc8-03c4f1779518", 00:15:02.772 "is_configured": true, 00:15:02.772 "data_offset": 2048, 00:15:02.772 "data_size": 63488 00:15:02.772 }, 00:15:02.772 { 00:15:02.772 "name": "BaseBdev2", 00:15:02.772 "uuid": "026cf5f2-683d-4d16-b9b5-39ff0c92cf66", 00:15:02.772 "is_configured": true, 00:15:02.772 "data_offset": 2048, 00:15:02.772 "data_size": 63488 00:15:02.772 }, 00:15:02.772 { 00:15:02.772 "name": "BaseBdev3", 00:15:02.772 "uuid": "f7540f1a-6342-42a9-9a6e-d2b1b31fdb00", 00:15:02.772 "is_configured": true, 00:15:02.772 "data_offset": 2048, 00:15:02.772 "data_size": 63488 00:15:02.772 } 00:15:02.772 ] 00:15:02.772 } 00:15:02.772 } 00:15:02.772 }' 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:02.772 BaseBdev2 00:15:02.772 BaseBdev3' 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.772 21:46:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.772 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.032 [2024-09-29 21:46:21.875434] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:03.032 [2024-09-29 21:46:21.875502] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.032 [2024-09-29 21:46:21.875580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.032 [2024-09-29 21:46:21.875836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.032 [2024-09-29 21:46:21.875888] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80558 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 
80558 ']' 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80558 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80558 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80558' 00:15:03.032 killing process with pid 80558 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80558 00:15:03.032 [2024-09-29 21:46:21.926766] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:03.032 21:46:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80558 00:15:03.292 [2024-09-29 21:46:22.207674] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.673 21:46:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:04.673 00:15:04.673 real 0m10.522s 00:15:04.673 user 0m16.598s 00:15:04.673 sys 0m2.035s 00:15:04.673 21:46:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.673 21:46:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.673 ************************************ 00:15:04.673 END TEST raid5f_state_function_test_sb 00:15:04.673 ************************************ 00:15:04.673 21:46:23 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test 
raid5f_superblock_test raid_superblock_test raid5f 3 00:15:04.673 21:46:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:04.673 21:46:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.673 21:46:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.673 ************************************ 00:15:04.673 START TEST raid5f_superblock_test 00:15:04.673 ************************************ 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:04.673 21:46:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81183 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81183 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81183 ']' 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.673 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.673 [2024-09-29 21:46:23.576353] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:15:04.674 [2024-09-29 21:46:23.576589] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81183 ] 00:15:04.934 [2024-09-29 21:46:23.747099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.194 [2024-09-29 21:46:23.948294] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.194 [2024-09-29 21:46:24.156767] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.194 [2024-09-29 21:46:24.156804] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.453 malloc1 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.453 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.453 [2024-09-29 21:46:24.426643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:05.453 [2024-09-29 21:46:24.426792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.454 [2024-09-29 21:46:24.426832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:05.454 [2024-09-29 21:46:24.426863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.454 [2024-09-29 21:46:24.428743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.454 [2024-09-29 21:46:24.428813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:05.454 pt1 00:15:05.454 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.454 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:05.454 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.454 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:05.454 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:05.454 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:05.454 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:05.454 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:05.454 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:05.454 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:05.454 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.454 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.714 malloc2 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.714 [2024-09-29 21:46:24.515649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:05.714 [2024-09-29 21:46:24.515754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.714 [2024-09-29 21:46:24.515793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:05.714 [2024-09-29 21:46:24.515819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.714 [2024-09-29 21:46:24.517729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.714 [2024-09-29 21:46:24.517800] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:05.714 pt2 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.714 malloc3 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.714 [2024-09-29 21:46:24.570853] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:05.714 [2024-09-29 21:46:24.570954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.714 [2024-09-29 21:46:24.570991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:05.714 [2024-09-29 21:46:24.571020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.714 [2024-09-29 21:46:24.573027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.714 [2024-09-29 21:46:24.573108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:05.714 pt3 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.714 [2024-09-29 21:46:24.582914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:05.714 [2024-09-29 21:46:24.584681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:05.714 [2024-09-29 21:46:24.584784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:05.714 [2024-09-29 21:46:24.584967] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:05.714 [2024-09-29 21:46:24.585024] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:05.714 [2024-09-29 21:46:24.585281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:05.714 [2024-09-29 21:46:24.590199] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:05.714 [2024-09-29 21:46:24.590250] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:05.714 [2024-09-29 21:46:24.590447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.714 
21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.714 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.714 "name": "raid_bdev1", 00:15:05.714 "uuid": "f5ab38c8-345c-4bac-981f-3cf56c925030", 00:15:05.714 "strip_size_kb": 64, 00:15:05.714 "state": "online", 00:15:05.714 "raid_level": "raid5f", 00:15:05.714 "superblock": true, 00:15:05.714 "num_base_bdevs": 3, 00:15:05.715 "num_base_bdevs_discovered": 3, 00:15:05.715 "num_base_bdevs_operational": 3, 00:15:05.715 "base_bdevs_list": [ 00:15:05.715 { 00:15:05.715 "name": "pt1", 00:15:05.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:05.715 "is_configured": true, 00:15:05.715 "data_offset": 2048, 00:15:05.715 "data_size": 63488 00:15:05.715 }, 00:15:05.715 { 00:15:05.715 "name": "pt2", 00:15:05.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.715 "is_configured": true, 00:15:05.715 "data_offset": 2048, 00:15:05.715 "data_size": 63488 00:15:05.715 }, 00:15:05.715 { 00:15:05.715 "name": "pt3", 00:15:05.715 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.715 "is_configured": true, 00:15:05.715 "data_offset": 2048, 00:15:05.715 "data_size": 63488 00:15:05.715 } 00:15:05.715 ] 00:15:05.715 }' 00:15:05.715 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.715 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:06.283 21:46:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:06.283 [2024-09-29 21:46:25.052225] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:06.283 "name": "raid_bdev1", 00:15:06.283 "aliases": [ 00:15:06.283 "f5ab38c8-345c-4bac-981f-3cf56c925030" 00:15:06.283 ], 00:15:06.283 "product_name": "Raid Volume", 00:15:06.283 "block_size": 512, 00:15:06.283 "num_blocks": 126976, 00:15:06.283 "uuid": "f5ab38c8-345c-4bac-981f-3cf56c925030", 00:15:06.283 "assigned_rate_limits": { 00:15:06.283 "rw_ios_per_sec": 0, 00:15:06.283 "rw_mbytes_per_sec": 0, 00:15:06.283 "r_mbytes_per_sec": 0, 00:15:06.283 "w_mbytes_per_sec": 0 00:15:06.283 }, 00:15:06.283 "claimed": false, 00:15:06.283 "zoned": false, 00:15:06.283 "supported_io_types": { 00:15:06.283 "read": true, 00:15:06.283 "write": true, 00:15:06.283 "unmap": false, 00:15:06.283 "flush": false, 00:15:06.283 "reset": true, 00:15:06.283 "nvme_admin": false, 00:15:06.283 "nvme_io": false, 00:15:06.283 "nvme_io_md": false, 
00:15:06.283 "write_zeroes": true, 00:15:06.283 "zcopy": false, 00:15:06.283 "get_zone_info": false, 00:15:06.283 "zone_management": false, 00:15:06.283 "zone_append": false, 00:15:06.283 "compare": false, 00:15:06.283 "compare_and_write": false, 00:15:06.283 "abort": false, 00:15:06.283 "seek_hole": false, 00:15:06.283 "seek_data": false, 00:15:06.283 "copy": false, 00:15:06.283 "nvme_iov_md": false 00:15:06.283 }, 00:15:06.283 "driver_specific": { 00:15:06.283 "raid": { 00:15:06.283 "uuid": "f5ab38c8-345c-4bac-981f-3cf56c925030", 00:15:06.283 "strip_size_kb": 64, 00:15:06.283 "state": "online", 00:15:06.283 "raid_level": "raid5f", 00:15:06.283 "superblock": true, 00:15:06.283 "num_base_bdevs": 3, 00:15:06.283 "num_base_bdevs_discovered": 3, 00:15:06.283 "num_base_bdevs_operational": 3, 00:15:06.283 "base_bdevs_list": [ 00:15:06.283 { 00:15:06.283 "name": "pt1", 00:15:06.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.283 "is_configured": true, 00:15:06.283 "data_offset": 2048, 00:15:06.283 "data_size": 63488 00:15:06.283 }, 00:15:06.283 { 00:15:06.283 "name": "pt2", 00:15:06.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.283 "is_configured": true, 00:15:06.283 "data_offset": 2048, 00:15:06.283 "data_size": 63488 00:15:06.283 }, 00:15:06.283 { 00:15:06.283 "name": "pt3", 00:15:06.283 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.283 "is_configured": true, 00:15:06.283 "data_offset": 2048, 00:15:06.283 "data_size": 63488 00:15:06.283 } 00:15:06.283 ] 00:15:06.283 } 00:15:06.283 } 00:15:06.283 }' 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:06.283 pt2 00:15:06.283 pt3' 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.283 
21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.283 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.543 [2024-09-29 21:46:25.323694] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f5ab38c8-345c-4bac-981f-3cf56c925030 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f5ab38c8-345c-4bac-981f-3cf56c925030 ']' 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.543 21:46:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.543 [2024-09-29 21:46:25.371459] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.543 [2024-09-29 21:46:25.371529] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.543 [2024-09-29 21:46:25.371603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.543 [2024-09-29 21:46:25.371674] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.543 [2024-09-29 21:46:25.371726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.543 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.803 [2024-09-29 21:46:25.531208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:06.803 [2024-09-29 21:46:25.532908] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:06.803 [2024-09-29 21:46:25.532995] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:06.803 [2024-09-29 21:46:25.533067] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:06.803 [2024-09-29 21:46:25.533167] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:06.803 [2024-09-29 21:46:25.533216] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:06.803 [2024-09-29 21:46:25.533274] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.803 [2024-09-29 21:46:25.533285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:06.803 request: 00:15:06.803 { 00:15:06.803 "name": "raid_bdev1", 00:15:06.803 "raid_level": "raid5f", 00:15:06.803 "base_bdevs": [ 00:15:06.803 "malloc1", 00:15:06.803 "malloc2", 00:15:06.803 "malloc3" 00:15:06.803 ], 00:15:06.803 "strip_size_kb": 64, 00:15:06.803 "superblock": false, 00:15:06.803 "method": "bdev_raid_create", 00:15:06.803 "req_id": 1 00:15:06.803 } 00:15:06.803 Got JSON-RPC error response 00:15:06.803 response: 00:15:06.803 { 00:15:06.803 "code": -17, 00:15:06.803 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:06.803 } 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.803 
21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.803 [2024-09-29 21:46:25.599138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:06.803 [2024-09-29 21:46:25.599226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.803 [2024-09-29 21:46:25.599257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:06.803 [2024-09-29 21:46:25.599284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.803 [2024-09-29 21:46:25.601169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.803 [2024-09-29 21:46:25.601232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:06.803 [2024-09-29 21:46:25.601313] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:06.803 [2024-09-29 21:46:25.601383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:06.803 pt1 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.803 "name": "raid_bdev1", 00:15:06.803 "uuid": "f5ab38c8-345c-4bac-981f-3cf56c925030", 00:15:06.803 "strip_size_kb": 64, 00:15:06.803 "state": "configuring", 00:15:06.803 "raid_level": "raid5f", 00:15:06.803 "superblock": true, 00:15:06.803 "num_base_bdevs": 3, 00:15:06.803 "num_base_bdevs_discovered": 1, 00:15:06.803 
"num_base_bdevs_operational": 3, 00:15:06.803 "base_bdevs_list": [ 00:15:06.803 { 00:15:06.803 "name": "pt1", 00:15:06.803 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.803 "is_configured": true, 00:15:06.803 "data_offset": 2048, 00:15:06.803 "data_size": 63488 00:15:06.803 }, 00:15:06.803 { 00:15:06.803 "name": null, 00:15:06.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.803 "is_configured": false, 00:15:06.803 "data_offset": 2048, 00:15:06.803 "data_size": 63488 00:15:06.803 }, 00:15:06.803 { 00:15:06.803 "name": null, 00:15:06.803 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.803 "is_configured": false, 00:15:06.803 "data_offset": 2048, 00:15:06.803 "data_size": 63488 00:15:06.803 } 00:15:06.803 ] 00:15:06.803 }' 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.803 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.372 [2024-09-29 21:46:26.070298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.372 [2024-09-29 21:46:26.070388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.372 [2024-09-29 21:46:26.070425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:07.372 [2024-09-29 21:46:26.070452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.372 [2024-09-29 21:46:26.070766] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.372 [2024-09-29 21:46:26.070817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.372 [2024-09-29 21:46:26.070894] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:07.372 [2024-09-29 21:46:26.070937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.372 pt2 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.372 [2024-09-29 21:46:26.082304] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.372 "name": "raid_bdev1", 00:15:07.372 "uuid": "f5ab38c8-345c-4bac-981f-3cf56c925030", 00:15:07.372 "strip_size_kb": 64, 00:15:07.372 "state": "configuring", 00:15:07.372 "raid_level": "raid5f", 00:15:07.372 "superblock": true, 00:15:07.372 "num_base_bdevs": 3, 00:15:07.372 "num_base_bdevs_discovered": 1, 00:15:07.372 "num_base_bdevs_operational": 3, 00:15:07.372 "base_bdevs_list": [ 00:15:07.372 { 00:15:07.372 "name": "pt1", 00:15:07.372 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.372 "is_configured": true, 00:15:07.372 "data_offset": 2048, 00:15:07.372 "data_size": 63488 00:15:07.372 }, 00:15:07.372 { 00:15:07.372 "name": null, 00:15:07.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.372 "is_configured": false, 00:15:07.372 "data_offset": 0, 00:15:07.372 "data_size": 63488 00:15:07.372 }, 00:15:07.372 { 00:15:07.372 "name": null, 00:15:07.372 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.372 "is_configured": false, 00:15:07.372 "data_offset": 2048, 00:15:07.372 "data_size": 63488 00:15:07.372 } 00:15:07.372 ] 00:15:07.372 }' 00:15:07.372 21:46:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.372 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.632 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:07.632 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:07.632 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.632 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.632 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.632 [2024-09-29 21:46:26.525552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.632 [2024-09-29 21:46:26.525640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.632 [2024-09-29 21:46:26.525667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:07.632 [2024-09-29 21:46:26.525691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.632 [2024-09-29 21:46:26.526006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.632 [2024-09-29 21:46:26.526076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.632 [2024-09-29 21:46:26.526153] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:07.632 [2024-09-29 21:46:26.526201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.632 pt2 00:15:07.632 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.632 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:07.632 21:46:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:07.632 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:07.632 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.632 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.632 [2024-09-29 21:46:26.537555] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:07.632 [2024-09-29 21:46:26.537641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.632 [2024-09-29 21:46:26.537667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:07.633 [2024-09-29 21:46:26.537692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.633 [2024-09-29 21:46:26.538010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.633 [2024-09-29 21:46:26.538083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:07.633 [2024-09-29 21:46:26.538165] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:07.633 [2024-09-29 21:46:26.538213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:07.633 [2024-09-29 21:46:26.538348] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:07.633 [2024-09-29 21:46:26.538386] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:07.633 [2024-09-29 21:46:26.538611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:07.633 [2024-09-29 21:46:26.543490] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:07.633 [2024-09-29 21:46:26.543541] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:07.633 [2024-09-29 21:46:26.543708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.633 pt3 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.633 "name": "raid_bdev1", 00:15:07.633 "uuid": "f5ab38c8-345c-4bac-981f-3cf56c925030", 00:15:07.633 "strip_size_kb": 64, 00:15:07.633 "state": "online", 00:15:07.633 "raid_level": "raid5f", 00:15:07.633 "superblock": true, 00:15:07.633 "num_base_bdevs": 3, 00:15:07.633 "num_base_bdevs_discovered": 3, 00:15:07.633 "num_base_bdevs_operational": 3, 00:15:07.633 "base_bdevs_list": [ 00:15:07.633 { 00:15:07.633 "name": "pt1", 00:15:07.633 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.633 "is_configured": true, 00:15:07.633 "data_offset": 2048, 00:15:07.633 "data_size": 63488 00:15:07.633 }, 00:15:07.633 { 00:15:07.633 "name": "pt2", 00:15:07.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.633 "is_configured": true, 00:15:07.633 "data_offset": 2048, 00:15:07.633 "data_size": 63488 00:15:07.633 }, 00:15:07.633 { 00:15:07.633 "name": "pt3", 00:15:07.633 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.633 "is_configured": true, 00:15:07.633 "data_offset": 2048, 00:15:07.633 "data_size": 63488 00:15:07.633 } 00:15:07.633 ] 00:15:07.633 }' 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.633 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.203 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:08.203 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:08.203 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:08.203 
21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:08.203 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:08.203 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:08.203 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:08.203 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:08.203 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.203 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.203 [2024-09-29 21:46:26.984635] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.203 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.203 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:08.203 "name": "raid_bdev1", 00:15:08.203 "aliases": [ 00:15:08.203 "f5ab38c8-345c-4bac-981f-3cf56c925030" 00:15:08.203 ], 00:15:08.203 "product_name": "Raid Volume", 00:15:08.203 "block_size": 512, 00:15:08.203 "num_blocks": 126976, 00:15:08.203 "uuid": "f5ab38c8-345c-4bac-981f-3cf56c925030", 00:15:08.203 "assigned_rate_limits": { 00:15:08.203 "rw_ios_per_sec": 0, 00:15:08.203 "rw_mbytes_per_sec": 0, 00:15:08.203 "r_mbytes_per_sec": 0, 00:15:08.203 "w_mbytes_per_sec": 0 00:15:08.203 }, 00:15:08.203 "claimed": false, 00:15:08.203 "zoned": false, 00:15:08.203 "supported_io_types": { 00:15:08.203 "read": true, 00:15:08.203 "write": true, 00:15:08.203 "unmap": false, 00:15:08.203 "flush": false, 00:15:08.203 "reset": true, 00:15:08.203 "nvme_admin": false, 00:15:08.203 "nvme_io": false, 00:15:08.203 "nvme_io_md": false, 00:15:08.203 "write_zeroes": true, 00:15:08.203 "zcopy": false, 00:15:08.203 "get_zone_info": false, 
00:15:08.203 "zone_management": false, 00:15:08.203 "zone_append": false, 00:15:08.203 "compare": false, 00:15:08.203 "compare_and_write": false, 00:15:08.203 "abort": false, 00:15:08.203 "seek_hole": false, 00:15:08.203 "seek_data": false, 00:15:08.203 "copy": false, 00:15:08.203 "nvme_iov_md": false 00:15:08.203 }, 00:15:08.203 "driver_specific": { 00:15:08.203 "raid": { 00:15:08.203 "uuid": "f5ab38c8-345c-4bac-981f-3cf56c925030", 00:15:08.203 "strip_size_kb": 64, 00:15:08.203 "state": "online", 00:15:08.203 "raid_level": "raid5f", 00:15:08.203 "superblock": true, 00:15:08.203 "num_base_bdevs": 3, 00:15:08.203 "num_base_bdevs_discovered": 3, 00:15:08.203 "num_base_bdevs_operational": 3, 00:15:08.203 "base_bdevs_list": [ 00:15:08.203 { 00:15:08.203 "name": "pt1", 00:15:08.204 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.204 "is_configured": true, 00:15:08.204 "data_offset": 2048, 00:15:08.204 "data_size": 63488 00:15:08.204 }, 00:15:08.204 { 00:15:08.204 "name": "pt2", 00:15:08.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.204 "is_configured": true, 00:15:08.204 "data_offset": 2048, 00:15:08.204 "data_size": 63488 00:15:08.204 }, 00:15:08.204 { 00:15:08.204 "name": "pt3", 00:15:08.204 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.204 "is_configured": true, 00:15:08.204 "data_offset": 2048, 00:15:08.204 "data_size": 63488 00:15:08.204 } 00:15:08.204 ] 00:15:08.204 } 00:15:08.204 } 00:15:08.204 }' 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:08.204 pt2 00:15:08.204 pt3' 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.204 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.463 [2024-09-29 21:46:27.232268] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f5ab38c8-345c-4bac-981f-3cf56c925030 '!=' f5ab38c8-345c-4bac-981f-3cf56c925030 ']' 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:08.463 21:46:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.463 [2024-09-29 21:46:27.276165] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.463 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.463 "name": "raid_bdev1", 00:15:08.463 "uuid": "f5ab38c8-345c-4bac-981f-3cf56c925030", 00:15:08.463 "strip_size_kb": 64, 00:15:08.463 "state": "online", 00:15:08.463 "raid_level": "raid5f", 00:15:08.463 "superblock": true, 00:15:08.463 "num_base_bdevs": 3, 00:15:08.464 "num_base_bdevs_discovered": 2, 00:15:08.464 "num_base_bdevs_operational": 2, 00:15:08.464 "base_bdevs_list": [ 00:15:08.464 { 00:15:08.464 "name": null, 00:15:08.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.464 "is_configured": false, 00:15:08.464 "data_offset": 0, 00:15:08.464 "data_size": 63488 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "name": "pt2", 00:15:08.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.464 "is_configured": true, 00:15:08.464 "data_offset": 2048, 00:15:08.464 "data_size": 63488 00:15:08.464 }, 00:15:08.464 { 00:15:08.464 "name": "pt3", 00:15:08.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.464 "is_configured": true, 00:15:08.464 "data_offset": 2048, 00:15:08.464 "data_size": 63488 00:15:08.464 } 00:15:08.464 ] 00:15:08.464 }' 00:15:08.464 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.464 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.034 [2024-09-29 21:46:27.731364] bdev_raid.c:2407:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:15:09.034 [2024-09-29 21:46:27.731432] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.034 [2024-09-29 21:46:27.731493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.034 [2024-09-29 21:46:27.731546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.034 [2024-09-29 21:46:27.731579] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.034 21:46:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.034 [2024-09-29 21:46:27.819200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.034 [2024-09-29 21:46:27.819289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.034 [2024-09-29 21:46:27.819317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:09.034 [2024-09-29 21:46:27.819346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:09.034 [2024-09-29 21:46:27.821291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.034 [2024-09-29 21:46:27.821363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.034 [2024-09-29 21:46:27.821441] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:09.034 [2024-09-29 21:46:27.821494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.034 pt2 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.034 21:46:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.034 "name": "raid_bdev1", 00:15:09.034 "uuid": "f5ab38c8-345c-4bac-981f-3cf56c925030", 00:15:09.034 "strip_size_kb": 64, 00:15:09.034 "state": "configuring", 00:15:09.034 "raid_level": "raid5f", 00:15:09.034 "superblock": true, 00:15:09.034 "num_base_bdevs": 3, 00:15:09.034 "num_base_bdevs_discovered": 1, 00:15:09.034 "num_base_bdevs_operational": 2, 00:15:09.034 "base_bdevs_list": [ 00:15:09.034 { 00:15:09.034 "name": null, 00:15:09.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.034 "is_configured": false, 00:15:09.034 "data_offset": 2048, 00:15:09.034 "data_size": 63488 00:15:09.034 }, 00:15:09.034 { 00:15:09.034 "name": "pt2", 00:15:09.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.034 "is_configured": true, 00:15:09.034 "data_offset": 2048, 00:15:09.034 "data_size": 63488 00:15:09.034 }, 00:15:09.034 { 00:15:09.034 "name": null, 00:15:09.034 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.034 "is_configured": false, 00:15:09.034 "data_offset": 2048, 00:15:09.034 "data_size": 63488 00:15:09.034 } 00:15:09.034 ] 00:15:09.034 }' 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.034 21:46:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:09.604 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:09.604 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
i=2 00:15:09.604 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:09.604 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.604 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.604 [2024-09-29 21:46:28.290482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:09.604 [2024-09-29 21:46:28.290567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.604 [2024-09-29 21:46:28.290598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:09.604 [2024-09-29 21:46:28.290626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.604 [2024-09-29 21:46:28.290974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.604 [2024-09-29 21:46:28.291041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:09.604 [2024-09-29 21:46:28.291121] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:09.604 [2024-09-29 21:46:28.291178] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:09.604 [2024-09-29 21:46:28.291302] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:09.604 [2024-09-29 21:46:28.291340] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:09.604 [2024-09-29 21:46:28.291546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:09.604 [2024-09-29 21:46:28.296160] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:09.604 [2024-09-29 21:46:28.296212] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:15:09.604 [2024-09-29 21:46:28.296489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.604 pt3 00:15:09.604 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.604 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:09.604 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.604 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.604 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.604 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.605 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.605 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.605 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.605 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.605 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.605 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.605 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.605 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.605 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.605 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.605 21:46:28 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.605 "name": "raid_bdev1", 00:15:09.605 "uuid": "f5ab38c8-345c-4bac-981f-3cf56c925030", 00:15:09.605 "strip_size_kb": 64, 00:15:09.605 "state": "online", 00:15:09.605 "raid_level": "raid5f", 00:15:09.605 "superblock": true, 00:15:09.605 "num_base_bdevs": 3, 00:15:09.605 "num_base_bdevs_discovered": 2, 00:15:09.605 "num_base_bdevs_operational": 2, 00:15:09.605 "base_bdevs_list": [ 00:15:09.605 { 00:15:09.605 "name": null, 00:15:09.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.605 "is_configured": false, 00:15:09.605 "data_offset": 2048, 00:15:09.605 "data_size": 63488 00:15:09.605 }, 00:15:09.605 { 00:15:09.605 "name": "pt2", 00:15:09.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.605 "is_configured": true, 00:15:09.605 "data_offset": 2048, 00:15:09.605 "data_size": 63488 00:15:09.605 }, 00:15:09.605 { 00:15:09.605 "name": "pt3", 00:15:09.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:09.605 "is_configured": true, 00:15:09.605 "data_offset": 2048, 00:15:09.605 "data_size": 63488 00:15:09.605 } 00:15:09.605 ] 00:15:09.605 }' 00:15:09.605 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.605 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.865 [2024-09-29 21:46:28.773519] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.865 [2024-09-29 21:46:28.773590] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.865 [2024-09-29 21:46:28.773653] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:09.865 [2024-09-29 21:46:28.773712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.865 [2024-09-29 21:46:28.773741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:09.865 21:46:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.865 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.125 [2024-09-29 21:46:28.849411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:10.125 [2024-09-29 21:46:28.849500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.125 [2024-09-29 21:46:28.849529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:10.125 [2024-09-29 21:46:28.849553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.125 [2024-09-29 21:46:28.851555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.125 [2024-09-29 21:46:28.851624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:10.125 [2024-09-29 21:46:28.851705] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:10.125 [2024-09-29 21:46:28.851756] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:10.125 [2024-09-29 21:46:28.851881] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:10.125 [2024-09-29 21:46:28.851930] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:10.125 [2024-09-29 21:46:28.851979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:10.125 [2024-09-29 21:46:28.852093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:10.125 pt1 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:10.125 21:46:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.125 "name": "raid_bdev1", 00:15:10.125 "uuid": "f5ab38c8-345c-4bac-981f-3cf56c925030", 00:15:10.125 "strip_size_kb": 64, 00:15:10.125 "state": "configuring", 00:15:10.125 "raid_level": "raid5f", 00:15:10.125 
"superblock": true, 00:15:10.125 "num_base_bdevs": 3, 00:15:10.125 "num_base_bdevs_discovered": 1, 00:15:10.125 "num_base_bdevs_operational": 2, 00:15:10.125 "base_bdevs_list": [ 00:15:10.125 { 00:15:10.125 "name": null, 00:15:10.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.125 "is_configured": false, 00:15:10.125 "data_offset": 2048, 00:15:10.125 "data_size": 63488 00:15:10.125 }, 00:15:10.125 { 00:15:10.125 "name": "pt2", 00:15:10.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.125 "is_configured": true, 00:15:10.125 "data_offset": 2048, 00:15:10.125 "data_size": 63488 00:15:10.125 }, 00:15:10.125 { 00:15:10.125 "name": null, 00:15:10.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.125 "is_configured": false, 00:15:10.125 "data_offset": 2048, 00:15:10.125 "data_size": 63488 00:15:10.125 } 00:15:10.125 ] 00:15:10.125 }' 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.125 21:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.385 [2024-09-29 21:46:29.356564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:10.385 [2024-09-29 21:46:29.356655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.385 [2024-09-29 21:46:29.356688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:10.385 [2024-09-29 21:46:29.356716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.385 [2024-09-29 21:46:29.357075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.385 [2024-09-29 21:46:29.357131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:10.385 [2024-09-29 21:46:29.357208] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:10.385 [2024-09-29 21:46:29.357252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:10.385 [2024-09-29 21:46:29.357363] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:10.385 [2024-09-29 21:46:29.357398] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:10.385 [2024-09-29 21:46:29.357663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:10.385 [2024-09-29 21:46:29.362888] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:10.385 [2024-09-29 21:46:29.362944] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:10.385 [2024-09-29 21:46:29.363169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.385 pt3 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.385 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.644 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.644 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.644 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.644 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.644 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.644 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.644 "name": "raid_bdev1", 00:15:10.644 "uuid": "f5ab38c8-345c-4bac-981f-3cf56c925030", 00:15:10.644 "strip_size_kb": 64, 00:15:10.644 "state": "online", 00:15:10.644 "raid_level": 
"raid5f", 00:15:10.644 "superblock": true, 00:15:10.644 "num_base_bdevs": 3, 00:15:10.644 "num_base_bdevs_discovered": 2, 00:15:10.644 "num_base_bdevs_operational": 2, 00:15:10.644 "base_bdevs_list": [ 00:15:10.644 { 00:15:10.644 "name": null, 00:15:10.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.644 "is_configured": false, 00:15:10.644 "data_offset": 2048, 00:15:10.644 "data_size": 63488 00:15:10.644 }, 00:15:10.644 { 00:15:10.644 "name": "pt2", 00:15:10.644 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.644 "is_configured": true, 00:15:10.644 "data_offset": 2048, 00:15:10.644 "data_size": 63488 00:15:10.644 }, 00:15:10.644 { 00:15:10.644 "name": "pt3", 00:15:10.644 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.644 "is_configured": true, 00:15:10.644 "data_offset": 2048, 00:15:10.644 "data_size": 63488 00:15:10.644 } 00:15:10.644 ] 00:15:10.644 }' 00:15:10.644 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.644 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.903 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:10.903 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:10.903 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.903 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.903 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.903 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:10.903 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:10.903 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:10.903 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.903 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:10.903 [2024-09-29 21:46:29.860392] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.903 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.162 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f5ab38c8-345c-4bac-981f-3cf56c925030 '!=' f5ab38c8-345c-4bac-981f-3cf56c925030 ']' 00:15:11.162 21:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81183 00:15:11.162 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81183 ']' 00:15:11.162 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81183 00:15:11.162 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:11.162 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:11.162 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81183 00:15:11.162 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:11.162 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:11.162 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81183' 00:15:11.162 killing process with pid 81183 00:15:11.162 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 81183 00:15:11.162 [2024-09-29 21:46:29.921302] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:11.162 [2024-09-29 21:46:29.921411] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:11.162 [2024-09-29 21:46:29.921479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.162 [2024-09-29 21:46:29.921525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:11.162 21:46:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 81183 00:15:11.422 [2024-09-29 21:46:30.201000] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.805 21:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:12.805 ************************************ 00:15:12.805 END TEST raid5f_superblock_test 00:15:12.805 ************************************ 00:15:12.805 00:15:12.805 real 0m7.923s 00:15:12.805 user 0m12.329s 00:15:12.805 sys 0m1.463s 00:15:12.805 21:46:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:12.805 21:46:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.805 21:46:31 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:12.805 21:46:31 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:12.805 21:46:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:12.805 21:46:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:12.805 21:46:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:12.805 ************************************ 00:15:12.805 START TEST raid5f_rebuild_test 00:15:12.805 ************************************ 00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81628
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81628
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 81628 ']'
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:12.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:15:12.805 21:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:12.805 [2024-09-29 21:46:31.587010] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:15:12.806 [2024-09-29 21:46:31.587217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81628 ]
00:15:12.806 I/O size of 3145728 is greater than zero copy threshold (65536).
00:15:12.806 Zero copy mechanism will not be used.
00:15:12.806 [2024-09-29 21:46:31.755547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:13.066 [2024-09-29 21:46:31.946256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:15:13.325 [2024-09-29 21:46:32.134982] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:13.325 [2024-09-29 21:46:32.135045] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:13.585 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:13.585 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0
00:15:13.585 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.586 BaseBdev1_malloc
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.586 [2024-09-29 21:46:32.438595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:15:13.586 [2024-09-29 21:46:32.438743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:13.586 [2024-09-29 21:46:32.438785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:15:13.586 [2024-09-29 21:46:32.438823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:13.586 [2024-09-29 21:46:32.440840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:13.586 [2024-09-29 21:46:32.440919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:15:13.586 BaseBdev1
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.586 BaseBdev2_malloc
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.586 [2024-09-29 21:46:32.521106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:15:13.586 [2024-09-29 21:46:32.521219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:13.586 [2024-09-29 21:46:32.521242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:15:13.586 [2024-09-29 21:46:32.521264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:13.586 [2024-09-29 21:46:32.523190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:13.586 [2024-09-29 21:46:32.523229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:15:13.586 BaseBdev2
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.586 BaseBdev3_malloc
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.586 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.846 [2024-09-29 21:46:32.574389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:15:13.846 [2024-09-29 21:46:32.574498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:13.846 [2024-09-29 21:46:32.574534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:15:13.846 [2024-09-29 21:46:32.574561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:13.846 [2024-09-29 21:46:32.576450] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:13.846 [2024-09-29 21:46:32.576526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:15:13.846 BaseBdev3
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.847 spare_malloc
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.847 spare_delay
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.847 [2024-09-29 21:46:32.639550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:13.847 [2024-09-29 21:46:32.639655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:13.847 [2024-09-29 21:46:32.639686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:15:13.847 [2024-09-29 21:46:32.639714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:13.847 [2024-09-29 21:46:32.641756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:13.847 [2024-09-29 21:46:32.641836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:13.847 spare
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.847 [2024-09-29 21:46:32.651608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:13.847 [2024-09-29 21:46:32.653307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:13.847 [2024-09-29 21:46:32.653404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:13.847 [2024-09-29 21:46:32.653503] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:15:13.847 [2024-09-29 21:46:32.653544] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:15:13.847 [2024-09-29 21:46:32.653786] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:15:13.847 [2024-09-29 21:46:32.659273] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:15:13.847 [2024-09-29 21:46:32.659331] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:15:13.847 [2024-09-29 21:46:32.659516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:13.847 "name": "raid_bdev1",
00:15:13.847 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6",
00:15:13.847 "strip_size_kb": 64,
00:15:13.847 "state": "online",
00:15:13.847 "raid_level": "raid5f",
00:15:13.847 "superblock": false,
00:15:13.847 "num_base_bdevs": 3,
00:15:13.847 "num_base_bdevs_discovered": 3,
00:15:13.847 "num_base_bdevs_operational": 3,
00:15:13.847 "base_bdevs_list": [
00:15:13.847 {
00:15:13.847 "name": "BaseBdev1",
00:15:13.847 "uuid": "e0e26a70-cfd7-5737-9f80-b89144cfbbc1",
00:15:13.847 "is_configured": true,
00:15:13.847 "data_offset": 0,
00:15:13.847 "data_size": 65536
00:15:13.847 },
00:15:13.847 {
00:15:13.847 "name": "BaseBdev2",
00:15:13.847 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec",
00:15:13.847 "is_configured": true,
00:15:13.847 "data_offset": 0,
00:15:13.847 "data_size": 65536
00:15:13.847 },
00:15:13.847 {
00:15:13.847 "name": "BaseBdev3",
00:15:13.847 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7",
00:15:13.847 "is_configured": true,
00:15:13.847 "data_offset": 0,
00:15:13.847 "data_size": 65536
00:15:13.847 }
00:15:13.847 ]
00:15:13.847 }'
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:13.847 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:14.417 [2024-09-29 21:46:33.100684] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:14.417 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
[2024-09-29 21:46:33.348215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
/dev/nbd0
21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:14.676 1+0 records in
00:15:14.676 1+0 records out
00:15:14.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378989 s, 10.8 MB/s
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128
00:15:14.676 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
00:15:14.935 512+0 records in
00:15:14.935 512+0 records out
00:15:14.935 67108864 bytes (67 MB, 64 MiB) copied, 0.388175 s, 173 MB/s
00:15:14.935 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:15:14.935 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:14.935 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:15:14.935 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:14.935 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:15:14.935 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:14.935 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
[2024-09-29 21:46:34.042945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.196 [2024-09-29 21:46:34.057154] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:15.196 "name": "raid_bdev1",
00:15:15.196 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6",
00:15:15.196 "strip_size_kb": 64,
00:15:15.196 "state": "online",
00:15:15.196 "raid_level": "raid5f",
00:15:15.196 "superblock": false,
00:15:15.196 "num_base_bdevs": 3,
00:15:15.196 "num_base_bdevs_discovered": 2,
00:15:15.196 "num_base_bdevs_operational": 2,
00:15:15.196 "base_bdevs_list": [
00:15:15.196 {
00:15:15.196 "name": null,
00:15:15.196 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:15.196 "is_configured": false,
00:15:15.196 "data_offset": 0,
00:15:15.196 "data_size": 65536
00:15:15.196 },
00:15:15.196 {
00:15:15.196 "name": "BaseBdev2",
00:15:15.196 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec",
00:15:15.196 "is_configured": true,
00:15:15.196 "data_offset": 0,
00:15:15.196 "data_size": 65536
00:15:15.196 },
00:15:15.196 {
00:15:15.196 "name": "BaseBdev3",
00:15:15.196 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7",
00:15:15.196 "is_configured": true,
00:15:15.196 "data_offset": 0,
00:15:15.196 "data_size": 65536
00:15:15.196 }
00:15:15.196 ]
00:15:15.196 }'
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:15.196 21:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.766 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:15.766 21:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.766 21:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:15.766 [2024-09-29 21:46:34.476521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:15.766 [2024-09-29 21:46:34.488824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680
00:15:15.766 21:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.766 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
[2024-09-29 21:46:34.495325] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:16.704 "name": "raid_bdev1",
00:15:16.704 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6",
00:15:16.704 "strip_size_kb": 64,
00:15:16.704 "state": "online",
00:15:16.704 "raid_level": "raid5f",
00:15:16.704 "superblock": false,
00:15:16.704 "num_base_bdevs": 3,
00:15:16.704 "num_base_bdevs_discovered": 3,
00:15:16.704 "num_base_bdevs_operational": 3,
00:15:16.704 "process": {
00:15:16.704 "type": "rebuild",
00:15:16.704 "target": "spare",
00:15:16.704 "progress": {
00:15:16.704 "blocks": 20480,
00:15:16.704 "percent": 15
00:15:16.704 }
00:15:16.704 },
00:15:16.704 "base_bdevs_list": [
00:15:16.704 {
00:15:16.704 "name": "spare",
00:15:16.704 "uuid": "43a26a02-a53c-55eb-96d7-a201e925ea1e",
00:15:16.704 "is_configured": true,
00:15:16.704 "data_offset": 0,
00:15:16.704 "data_size": 65536
00:15:16.704 },
00:15:16.704 {
00:15:16.704 "name": "BaseBdev2",
00:15:16.704 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec",
00:15:16.704 "is_configured": true,
00:15:16.704 "data_offset": 0,
00:15:16.704 "data_size": 65536
00:15:16.704 },
00:15:16.704 {
00:15:16.704 "name": "BaseBdev3",
00:15:16.704 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7",
00:15:16.704 "is_configured": true,
00:15:16.704 "data_offset": 0,
00:15:16.704 "data_size": 65536
00:15:16.704 }
00:15:16.704 ]
00:15:16.704 }'
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.704 21:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.704 [2024-09-29 21:46:35.646124] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:16.964 [2024-09-29 21:46:35.702235] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:16.964 [2024-09-29 21:46:35.702288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:16.964 [2024-09-29 21:46:35.702305] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:16.964 [2024-09-29 21:46:35.702312] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.964 21:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:16.965 21:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.965 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:16.965 "name": "raid_bdev1",
00:15:16.965 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6",
00:15:16.965 "strip_size_kb": 64,
00:15:16.965 "state": "online",
00:15:16.965 "raid_level": "raid5f",
00:15:16.965 "superblock": false,
00:15:16.965 "num_base_bdevs": 3,
00:15:16.965 "num_base_bdevs_discovered": 2,
00:15:16.965 "num_base_bdevs_operational": 2,
00:15:16.965 "base_bdevs_list": [
00:15:16.965 {
00:15:16.965 "name": null,
00:15:16.965 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:16.965 "is_configured": false,
00:15:16.965 "data_offset": 0,
00:15:16.965 "data_size": 65536
00:15:16.965 },
00:15:16.965 {
00:15:16.965 "name": "BaseBdev2",
00:15:16.965 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec",
00:15:16.965 "is_configured": true,
00:15:16.965 "data_offset": 0,
00:15:16.965 "data_size": 65536
00:15:16.965 },
00:15:16.965 {
00:15:16.965 "name": "BaseBdev3",
00:15:16.965 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7",
00:15:16.965 "is_configured": true,
00:15:16.965 "data_offset": 0,
00:15:16.965 "data_size": 65536
00:15:16.965 }
00:15:16.965 ]
00:15:16.965 }'
00:15:16.965 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:16.965 21:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:17.534 "name": "raid_bdev1",
00:15:17.534 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6",
00:15:17.534 "strip_size_kb": 64,
00:15:17.534 "state": "online",
00:15:17.534 "raid_level": "raid5f",
00:15:17.534 "superblock": false,
00:15:17.534 "num_base_bdevs": 3,
00:15:17.534 "num_base_bdevs_discovered": 2,
00:15:17.534 "num_base_bdevs_operational": 2,
00:15:17.534 "base_bdevs_list": [
00:15:17.534 {
00:15:17.534 "name": null,
00:15:17.534 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:17.534 "is_configured": false,
00:15:17.534 "data_offset": 0,
00:15:17.534 "data_size": 65536
00:15:17.534 },
00:15:17.534 {
00:15:17.534 "name": "BaseBdev2",
00:15:17.534 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec",
00:15:17.534 "is_configured": true,
00:15:17.534 "data_offset": 0,
00:15:17.534 "data_size": 65536
00:15:17.534 },
00:15:17.534 {
00:15:17.534 "name": "BaseBdev3",
00:15:17.534 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7",
00:15:17.534 "is_configured": true,
00:15:17.534 "data_offset": 0,
00:15:17.534 "data_size": 65536
00:15:17.534 }
00:15:17.534 ]
00:15:17.534 }'
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:17.534 [2024-09-29 21:46:36.375613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:17.534 [2024-09-29 21:46:36.388952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.534 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1
[2024-09-29 21:46:36.396083] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:18.474 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:18.474 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:18.474 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:18.474 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:18.474 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:18.474 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:18.474 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:18.474 21:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.474 21:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:15:18.474 21:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.474 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:18.474 "name": "raid_bdev1",
00:15:18.474 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6",
00:15:18.474 "strip_size_kb": 64,
00:15:18.474 "state": "online",
00:15:18.474 "raid_level": "raid5f",
00:15:18.474 "superblock": false,
00:15:18.474 "num_base_bdevs": 3,
00:15:18.474 "num_base_bdevs_discovered": 3,
00:15:18.474 "num_base_bdevs_operational": 3,
00:15:18.474 "process": {
00:15:18.474 "type": "rebuild",
00:15:18.474 "target": "spare",
00:15:18.474 "progress": {
00:15:18.474 "blocks": 20480,
"percent": 15 00:15:18.474 } 00:15:18.474 }, 00:15:18.474 "base_bdevs_list": [ 00:15:18.474 { 00:15:18.474 "name": "spare", 00:15:18.474 "uuid": "43a26a02-a53c-55eb-96d7-a201e925ea1e", 00:15:18.474 "is_configured": true, 00:15:18.474 "data_offset": 0, 00:15:18.474 "data_size": 65536 00:15:18.474 }, 00:15:18.474 { 00:15:18.474 "name": "BaseBdev2", 00:15:18.474 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec", 00:15:18.474 "is_configured": true, 00:15:18.474 "data_offset": 0, 00:15:18.474 "data_size": 65536 00:15:18.474 }, 00:15:18.474 { 00:15:18.474 "name": "BaseBdev3", 00:15:18.474 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7", 00:15:18.474 "is_configured": true, 00:15:18.474 "data_offset": 0, 00:15:18.474 "data_size": 65536 00:15:18.474 } 00:15:18.474 ] 00:15:18.474 }' 00:15:18.474 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=553 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.734 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.734 "name": "raid_bdev1", 00:15:18.734 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6", 00:15:18.734 "strip_size_kb": 64, 00:15:18.734 "state": "online", 00:15:18.734 "raid_level": "raid5f", 00:15:18.734 "superblock": false, 00:15:18.734 "num_base_bdevs": 3, 00:15:18.734 "num_base_bdevs_discovered": 3, 00:15:18.734 "num_base_bdevs_operational": 3, 00:15:18.734 "process": { 00:15:18.734 "type": "rebuild", 00:15:18.734 "target": "spare", 00:15:18.734 "progress": { 00:15:18.734 "blocks": 22528, 00:15:18.734 "percent": 17 00:15:18.734 } 00:15:18.734 }, 00:15:18.734 "base_bdevs_list": [ 00:15:18.734 { 00:15:18.734 "name": "spare", 00:15:18.734 "uuid": "43a26a02-a53c-55eb-96d7-a201e925ea1e", 00:15:18.734 "is_configured": true, 00:15:18.734 "data_offset": 0, 00:15:18.734 "data_size": 65536 00:15:18.734 }, 00:15:18.734 { 00:15:18.734 "name": "BaseBdev2", 00:15:18.734 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec", 00:15:18.735 "is_configured": true, 00:15:18.735 "data_offset": 0, 00:15:18.735 
"data_size": 65536 00:15:18.735 }, 00:15:18.735 { 00:15:18.735 "name": "BaseBdev3", 00:15:18.735 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7", 00:15:18.735 "is_configured": true, 00:15:18.735 "data_offset": 0, 00:15:18.735 "data_size": 65536 00:15:18.735 } 00:15:18.735 ] 00:15:18.735 }' 00:15:18.735 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.735 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:18.735 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.735 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.735 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:20.115 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:20.115 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.115 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.115 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.116 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.116 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.116 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.116 21:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.116 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.116 21:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.116 21:46:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.116 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.116 "name": "raid_bdev1", 00:15:20.116 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6", 00:15:20.116 "strip_size_kb": 64, 00:15:20.116 "state": "online", 00:15:20.116 "raid_level": "raid5f", 00:15:20.116 "superblock": false, 00:15:20.116 "num_base_bdevs": 3, 00:15:20.116 "num_base_bdevs_discovered": 3, 00:15:20.116 "num_base_bdevs_operational": 3, 00:15:20.116 "process": { 00:15:20.116 "type": "rebuild", 00:15:20.116 "target": "spare", 00:15:20.116 "progress": { 00:15:20.116 "blocks": 47104, 00:15:20.116 "percent": 35 00:15:20.116 } 00:15:20.116 }, 00:15:20.116 "base_bdevs_list": [ 00:15:20.116 { 00:15:20.116 "name": "spare", 00:15:20.116 "uuid": "43a26a02-a53c-55eb-96d7-a201e925ea1e", 00:15:20.116 "is_configured": true, 00:15:20.116 "data_offset": 0, 00:15:20.116 "data_size": 65536 00:15:20.116 }, 00:15:20.116 { 00:15:20.116 "name": "BaseBdev2", 00:15:20.116 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec", 00:15:20.116 "is_configured": true, 00:15:20.116 "data_offset": 0, 00:15:20.116 "data_size": 65536 00:15:20.116 }, 00:15:20.116 { 00:15:20.116 "name": "BaseBdev3", 00:15:20.116 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7", 00:15:20.116 "is_configured": true, 00:15:20.116 "data_offset": 0, 00:15:20.116 "data_size": 65536 00:15:20.116 } 00:15:20.116 ] 00:15:20.116 }' 00:15:20.116 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.116 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.116 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.116 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.116 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.056 "name": "raid_bdev1", 00:15:21.056 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6", 00:15:21.056 "strip_size_kb": 64, 00:15:21.056 "state": "online", 00:15:21.056 "raid_level": "raid5f", 00:15:21.056 "superblock": false, 00:15:21.056 "num_base_bdevs": 3, 00:15:21.056 "num_base_bdevs_discovered": 3, 00:15:21.056 "num_base_bdevs_operational": 3, 00:15:21.056 "process": { 00:15:21.056 "type": "rebuild", 00:15:21.056 "target": "spare", 00:15:21.056 "progress": { 00:15:21.056 "blocks": 69632, 00:15:21.056 "percent": 53 00:15:21.056 } 00:15:21.056 }, 00:15:21.056 "base_bdevs_list": [ 00:15:21.056 { 00:15:21.056 "name": "spare", 00:15:21.056 "uuid": 
"43a26a02-a53c-55eb-96d7-a201e925ea1e", 00:15:21.056 "is_configured": true, 00:15:21.056 "data_offset": 0, 00:15:21.056 "data_size": 65536 00:15:21.056 }, 00:15:21.056 { 00:15:21.056 "name": "BaseBdev2", 00:15:21.056 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec", 00:15:21.056 "is_configured": true, 00:15:21.056 "data_offset": 0, 00:15:21.056 "data_size": 65536 00:15:21.056 }, 00:15:21.056 { 00:15:21.056 "name": "BaseBdev3", 00:15:21.056 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7", 00:15:21.056 "is_configured": true, 00:15:21.056 "data_offset": 0, 00:15:21.056 "data_size": 65536 00:15:21.056 } 00:15:21.056 ] 00:15:21.056 }' 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.056 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:22.438 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:22.438 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.438 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.438 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.438 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.438 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.438 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.438 21:46:40 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.438 21:46:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.438 21:46:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.438 21:46:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.438 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.438 "name": "raid_bdev1", 00:15:22.438 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6", 00:15:22.438 "strip_size_kb": 64, 00:15:22.438 "state": "online", 00:15:22.438 "raid_level": "raid5f", 00:15:22.438 "superblock": false, 00:15:22.438 "num_base_bdevs": 3, 00:15:22.438 "num_base_bdevs_discovered": 3, 00:15:22.438 "num_base_bdevs_operational": 3, 00:15:22.438 "process": { 00:15:22.438 "type": "rebuild", 00:15:22.438 "target": "spare", 00:15:22.438 "progress": { 00:15:22.438 "blocks": 92160, 00:15:22.438 "percent": 70 00:15:22.438 } 00:15:22.438 }, 00:15:22.438 "base_bdevs_list": [ 00:15:22.438 { 00:15:22.438 "name": "spare", 00:15:22.438 "uuid": "43a26a02-a53c-55eb-96d7-a201e925ea1e", 00:15:22.438 "is_configured": true, 00:15:22.438 "data_offset": 0, 00:15:22.438 "data_size": 65536 00:15:22.438 }, 00:15:22.438 { 00:15:22.438 "name": "BaseBdev2", 00:15:22.438 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec", 00:15:22.438 "is_configured": true, 00:15:22.438 "data_offset": 0, 00:15:22.438 "data_size": 65536 00:15:22.438 }, 00:15:22.438 { 00:15:22.438 "name": "BaseBdev3", 00:15:22.438 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7", 00:15:22.438 "is_configured": true, 00:15:22.438 "data_offset": 0, 00:15:22.438 "data_size": 65536 00:15:22.438 } 00:15:22.438 ] 00:15:22.438 }' 00:15:22.438 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.438 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.438 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.438 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.438 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:23.377 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.378 "name": "raid_bdev1", 00:15:23.378 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6", 00:15:23.378 "strip_size_kb": 64, 00:15:23.378 "state": "online", 00:15:23.378 "raid_level": "raid5f", 00:15:23.378 "superblock": false, 00:15:23.378 "num_base_bdevs": 3, 00:15:23.378 "num_base_bdevs_discovered": 3, 00:15:23.378 
"num_base_bdevs_operational": 3, 00:15:23.378 "process": { 00:15:23.378 "type": "rebuild", 00:15:23.378 "target": "spare", 00:15:23.378 "progress": { 00:15:23.378 "blocks": 116736, 00:15:23.378 "percent": 89 00:15:23.378 } 00:15:23.378 }, 00:15:23.378 "base_bdevs_list": [ 00:15:23.378 { 00:15:23.378 "name": "spare", 00:15:23.378 "uuid": "43a26a02-a53c-55eb-96d7-a201e925ea1e", 00:15:23.378 "is_configured": true, 00:15:23.378 "data_offset": 0, 00:15:23.378 "data_size": 65536 00:15:23.378 }, 00:15:23.378 { 00:15:23.378 "name": "BaseBdev2", 00:15:23.378 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec", 00:15:23.378 "is_configured": true, 00:15:23.378 "data_offset": 0, 00:15:23.378 "data_size": 65536 00:15:23.378 }, 00:15:23.378 { 00:15:23.378 "name": "BaseBdev3", 00:15:23.378 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7", 00:15:23.378 "is_configured": true, 00:15:23.378 "data_offset": 0, 00:15:23.378 "data_size": 65536 00:15:23.378 } 00:15:23.378 ] 00:15:23.378 }' 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.378 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:23.947 [2024-09-29 21:46:42.829914] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:23.947 [2024-09-29 21:46:42.829991] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:23.947 [2024-09-29 21:46:42.830025] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.557 "name": "raid_bdev1", 00:15:24.557 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6", 00:15:24.557 "strip_size_kb": 64, 00:15:24.557 "state": "online", 00:15:24.557 "raid_level": "raid5f", 00:15:24.557 "superblock": false, 00:15:24.557 "num_base_bdevs": 3, 00:15:24.557 "num_base_bdevs_discovered": 3, 00:15:24.557 "num_base_bdevs_operational": 3, 00:15:24.557 "base_bdevs_list": [ 00:15:24.557 { 00:15:24.557 "name": "spare", 00:15:24.557 "uuid": "43a26a02-a53c-55eb-96d7-a201e925ea1e", 00:15:24.557 "is_configured": true, 00:15:24.557 "data_offset": 0, 00:15:24.557 "data_size": 65536 00:15:24.557 }, 00:15:24.557 { 00:15:24.557 "name": "BaseBdev2", 00:15:24.557 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec", 00:15:24.557 "is_configured": true, 00:15:24.557 
"data_offset": 0, 00:15:24.557 "data_size": 65536 00:15:24.557 }, 00:15:24.557 { 00:15:24.557 "name": "BaseBdev3", 00:15:24.557 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7", 00:15:24.557 "is_configured": true, 00:15:24.557 "data_offset": 0, 00:15:24.557 "data_size": 65536 00:15:24.557 } 00:15:24.557 ] 00:15:24.557 }' 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.557 21:46:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.557 "name": "raid_bdev1", 00:15:24.557 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6", 00:15:24.557 "strip_size_kb": 64, 00:15:24.557 "state": "online", 00:15:24.557 "raid_level": "raid5f", 00:15:24.557 "superblock": false, 00:15:24.557 "num_base_bdevs": 3, 00:15:24.557 "num_base_bdevs_discovered": 3, 00:15:24.557 "num_base_bdevs_operational": 3, 00:15:24.557 "base_bdevs_list": [ 00:15:24.557 { 00:15:24.557 "name": "spare", 00:15:24.557 "uuid": "43a26a02-a53c-55eb-96d7-a201e925ea1e", 00:15:24.557 "is_configured": true, 00:15:24.557 "data_offset": 0, 00:15:24.557 "data_size": 65536 00:15:24.557 }, 00:15:24.557 { 00:15:24.557 "name": "BaseBdev2", 00:15:24.557 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec", 00:15:24.557 "is_configured": true, 00:15:24.557 "data_offset": 0, 00:15:24.557 "data_size": 65536 00:15:24.557 }, 00:15:24.557 { 00:15:24.557 "name": "BaseBdev3", 00:15:24.557 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7", 00:15:24.557 "is_configured": true, 00:15:24.557 "data_offset": 0, 00:15:24.557 "data_size": 65536 00:15:24.557 } 00:15:24.557 ] 00:15:24.557 }' 00:15:24.557 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.832 21:46:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.832 "name": "raid_bdev1", 00:15:24.832 "uuid": "a39d6c73-0424-4d98-92ef-8cedc2d5d8c6", 00:15:24.832 "strip_size_kb": 64, 00:15:24.832 "state": "online", 00:15:24.832 "raid_level": "raid5f", 00:15:24.832 "superblock": false, 00:15:24.832 "num_base_bdevs": 3, 00:15:24.832 "num_base_bdevs_discovered": 3, 00:15:24.832 "num_base_bdevs_operational": 3, 00:15:24.832 "base_bdevs_list": [ 00:15:24.832 { 00:15:24.832 "name": "spare", 00:15:24.832 "uuid": "43a26a02-a53c-55eb-96d7-a201e925ea1e", 00:15:24.832 "is_configured": true, 00:15:24.832 "data_offset": 0, 00:15:24.832 "data_size": 65536 00:15:24.832 }, 00:15:24.832 { 00:15:24.832 
"name": "BaseBdev2", 00:15:24.832 "uuid": "0b4a63e9-0511-5adc-a17a-763edfe12eec", 00:15:24.832 "is_configured": true, 00:15:24.832 "data_offset": 0, 00:15:24.832 "data_size": 65536 00:15:24.832 }, 00:15:24.832 { 00:15:24.832 "name": "BaseBdev3", 00:15:24.832 "uuid": "fe225acb-f6b0-59ec-a4ea-cf6dbc09f3f7", 00:15:24.832 "is_configured": true, 00:15:24.832 "data_offset": 0, 00:15:24.832 "data_size": 65536 00:15:24.832 } 00:15:24.832 ] 00:15:24.832 }' 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.832 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.092 [2024-09-29 21:46:44.010165] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:25.092 [2024-09-29 21:46:44.010194] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.092 [2024-09-29 21:46:44.010258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.092 [2024-09-29 21:46:44.010329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.092 [2024-09-29 21:46:44.010348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:25.092 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:25.352 /dev/nbd0 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.352 1+0 records in 00:15:25.352 1+0 records out 00:15:25.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449111 s, 9.1 MB/s 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:25.352 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:25.612 /dev/nbd1 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.612 1+0 records in 00:15:25.612 1+0 records out 00:15:25.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506197 s, 8.1 MB/s 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:25.612 21:46:44 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:25.612 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:25.872 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:25.872 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.872 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:25.872 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:25.872 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:25.872 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.872 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:26.133 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:26.133 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:26.133 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:26.133 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.133 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.133 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:26.133 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:26.133 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:26.133 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.133 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81628 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 81628 ']' 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 81628 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81628 00:15:26.393 killing process with pid 81628 00:15:26.393 Received shutdown signal, test time was about 60.000000 seconds 00:15:26.393 00:15:26.393 Latency(us) 00:15:26.393 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.393 =================================================================================================================== 00:15:26.393 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81628' 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 81628 00:15:26.393 [2024-09-29 21:46:45.207539] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.393 21:46:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 81628 00:15:26.653 [2024-09-29 21:46:45.575817] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:28.036 00:15:28.036 real 0m15.272s 00:15:28.036 user 0m18.625s 00:15:28.036 sys 0m2.187s 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.036 ************************************ 00:15:28.036 END TEST raid5f_rebuild_test 00:15:28.036 ************************************ 00:15:28.036 21:46:46 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:28.036 21:46:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:28.036 21:46:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:28.036 21:46:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:28.036 
************************************ 00:15:28.036 START TEST raid5f_rebuild_test_sb 00:15:28.036 ************************************ 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82057 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82057 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82057 ']' 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.036 
21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:28.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:28.036 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.036 [2024-09-29 21:46:46.936338] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:15:28.036 [2024-09-29 21:46:46.936881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82057 ] 00:15:28.036 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:28.036 Zero copy mechanism will not be used. 
00:15:28.296 [2024-09-29 21:46:47.092823] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.557 [2024-09-29 21:46:47.281288] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.557 [2024-09-29 21:46:47.463459] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.557 [2024-09-29 21:46:47.463498] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.817 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.817 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:28.818 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:28.818 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:28.818 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.818 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.818 BaseBdev1_malloc 00:15:28.818 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.818 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:28.818 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.818 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.818 [2024-09-29 21:46:47.799658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:28.818 [2024-09-29 21:46:47.799718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.818 [2024-09-29 21:46:47.799737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:28.818 
[2024-09-29 21:46:47.799750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.078 [2024-09-29 21:46:47.801738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.078 [2024-09-29 21:46:47.801774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:29.078 BaseBdev1 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.078 BaseBdev2_malloc 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.078 [2024-09-29 21:46:47.883621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:29.078 [2024-09-29 21:46:47.883674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.078 [2024-09-29 21:46:47.883690] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:29.078 [2024-09-29 21:46:47.883702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.078 [2024-09-29 21:46:47.885647] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.078 [2024-09-29 21:46:47.885683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:29.078 BaseBdev2 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.078 BaseBdev3_malloc 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.078 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.078 [2024-09-29 21:46:47.934954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:29.079 [2024-09-29 21:46:47.935001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.079 [2024-09-29 21:46:47.935020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:29.079 [2024-09-29 21:46:47.935040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.079 [2024-09-29 21:46:47.936981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.079 [2024-09-29 21:46:47.937018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:29.079 BaseBdev3 00:15:29.079 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.079 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:29.079 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.079 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.079 spare_malloc 00:15:29.079 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.079 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:29.079 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.079 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.079 spare_delay 00:15:29.079 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.079 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:29.079 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.079 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.079 [2024-09-29 21:46:48.000571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:29.079 [2024-09-29 21:46:48.000617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.079 [2024-09-29 21:46:48.000633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:29.079 [2024-09-29 21:46:48.000643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.079 [2024-09-29 21:46:48.002609] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.079 [2024-09-29 21:46:48.002645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:29.079 spare 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.079 [2024-09-29 21:46:48.012622] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.079 [2024-09-29 21:46:48.014294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.079 [2024-09-29 21:46:48.014356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.079 [2024-09-29 21:46:48.014523] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:29.079 [2024-09-29 21:46:48.014535] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:29.079 [2024-09-29 21:46:48.014748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:29.079 [2024-09-29 21:46:48.020095] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:29.079 [2024-09-29 21:46:48.020120] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:29.079 [2024-09-29 21:46:48.020296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.079 21:46:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.079 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.338 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.338 "name": "raid_bdev1", 00:15:29.338 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:29.338 "strip_size_kb": 64, 00:15:29.338 "state": "online", 00:15:29.338 "raid_level": "raid5f", 00:15:29.338 "superblock": true, 
00:15:29.338 "num_base_bdevs": 3, 00:15:29.338 "num_base_bdevs_discovered": 3, 00:15:29.338 "num_base_bdevs_operational": 3, 00:15:29.338 "base_bdevs_list": [ 00:15:29.338 { 00:15:29.338 "name": "BaseBdev1", 00:15:29.338 "uuid": "b691c1e3-32dc-50a2-9960-e168639e5cba", 00:15:29.338 "is_configured": true, 00:15:29.338 "data_offset": 2048, 00:15:29.338 "data_size": 63488 00:15:29.338 }, 00:15:29.338 { 00:15:29.338 "name": "BaseBdev2", 00:15:29.338 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:29.338 "is_configured": true, 00:15:29.338 "data_offset": 2048, 00:15:29.338 "data_size": 63488 00:15:29.338 }, 00:15:29.338 { 00:15:29.338 "name": "BaseBdev3", 00:15:29.338 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:29.338 "is_configured": true, 00:15:29.338 "data_offset": 2048, 00:15:29.338 "data_size": 63488 00:15:29.338 } 00:15:29.338 ] 00:15:29.338 }' 00:15:29.338 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.338 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.598 [2024-09-29 21:46:48.477651] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.598 21:46:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:29.598 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:15:29.858 [2024-09-29 21:46:48.741126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:29.858 /dev/nbd0 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.858 1+0 records in 00:15:29.858 1+0 records out 00:15:29.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426814 s, 9.6 MB/s 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:29.858 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:30.426 496+0 records in 00:15:30.426 496+0 records out 00:15:30.426 65011712 bytes (65 MB, 62 MiB) copied, 0.555251 s, 117 MB/s 00:15:30.426 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:30.426 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:30.426 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:30.426 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:30.426 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:30.426 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.426 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:30.686 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:30.687 [2024-09-29 21:46:49.579055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.687 [2024-09-29 21:46:49.593720] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.687 21:46:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.687 "name": "raid_bdev1", 00:15:30.687 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:30.687 "strip_size_kb": 64, 00:15:30.687 "state": "online", 00:15:30.687 "raid_level": "raid5f", 00:15:30.687 "superblock": true, 00:15:30.687 "num_base_bdevs": 3, 00:15:30.687 "num_base_bdevs_discovered": 2, 00:15:30.687 "num_base_bdevs_operational": 2, 00:15:30.687 "base_bdevs_list": [ 00:15:30.687 { 00:15:30.687 "name": null, 00:15:30.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.687 "is_configured": false, 00:15:30.687 "data_offset": 0, 00:15:30.687 "data_size": 63488 00:15:30.687 }, 00:15:30.687 { 00:15:30.687 "name": "BaseBdev2", 00:15:30.687 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:30.687 "is_configured": true, 00:15:30.687 "data_offset": 2048, 00:15:30.687 "data_size": 63488 00:15:30.687 }, 00:15:30.687 { 00:15:30.687 "name": "BaseBdev3", 00:15:30.687 "uuid": 
"719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:30.687 "is_configured": true, 00:15:30.687 "data_offset": 2048, 00:15:30.687 "data_size": 63488 00:15:30.687 } 00:15:30.687 ] 00:15:30.687 }' 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.687 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.256 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:31.256 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.256 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.256 [2024-09-29 21:46:50.088860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.256 [2024-09-29 21:46:50.103129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:31.256 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.256 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:31.256 [2024-09-29 21:46:50.110152] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:32.196 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.196 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.196 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.196 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.196 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.196 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:15:32.196 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.196 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.196 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.196 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.196 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.196 "name": "raid_bdev1", 00:15:32.196 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:32.196 "strip_size_kb": 64, 00:15:32.196 "state": "online", 00:15:32.196 "raid_level": "raid5f", 00:15:32.196 "superblock": true, 00:15:32.196 "num_base_bdevs": 3, 00:15:32.196 "num_base_bdevs_discovered": 3, 00:15:32.196 "num_base_bdevs_operational": 3, 00:15:32.196 "process": { 00:15:32.196 "type": "rebuild", 00:15:32.196 "target": "spare", 00:15:32.196 "progress": { 00:15:32.196 "blocks": 20480, 00:15:32.196 "percent": 16 00:15:32.196 } 00:15:32.196 }, 00:15:32.196 "base_bdevs_list": [ 00:15:32.196 { 00:15:32.196 "name": "spare", 00:15:32.196 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:32.196 "is_configured": true, 00:15:32.196 "data_offset": 2048, 00:15:32.196 "data_size": 63488 00:15:32.196 }, 00:15:32.196 { 00:15:32.196 "name": "BaseBdev2", 00:15:32.196 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:32.196 "is_configured": true, 00:15:32.196 "data_offset": 2048, 00:15:32.196 "data_size": 63488 00:15:32.196 }, 00:15:32.196 { 00:15:32.196 "name": "BaseBdev3", 00:15:32.196 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:32.196 "is_configured": true, 00:15:32.196 "data_offset": 2048, 00:15:32.196 "data_size": 63488 00:15:32.196 } 00:15:32.196 ] 00:15:32.196 }' 00:15:32.196 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.456 21:46:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.456 [2024-09-29 21:46:51.241089] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.456 [2024-09-29 21:46:51.317173] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:32.456 [2024-09-29 21:46:51.317220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.456 [2024-09-29 21:46:51.317237] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.456 [2024-09-29 21:46:51.317244] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.456 21:46:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.456 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.457 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.457 "name": "raid_bdev1", 00:15:32.457 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:32.457 "strip_size_kb": 64, 00:15:32.457 "state": "online", 00:15:32.457 "raid_level": "raid5f", 00:15:32.457 "superblock": true, 00:15:32.457 "num_base_bdevs": 3, 00:15:32.457 "num_base_bdevs_discovered": 2, 00:15:32.457 "num_base_bdevs_operational": 2, 00:15:32.457 "base_bdevs_list": [ 00:15:32.457 { 00:15:32.457 "name": null, 00:15:32.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.457 "is_configured": false, 00:15:32.457 "data_offset": 0, 00:15:32.457 "data_size": 63488 00:15:32.457 }, 00:15:32.457 { 00:15:32.457 "name": "BaseBdev2", 00:15:32.457 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:32.457 "is_configured": true, 00:15:32.457 "data_offset": 2048, 00:15:32.457 "data_size": 
63488 00:15:32.457 }, 00:15:32.457 { 00:15:32.457 "name": "BaseBdev3", 00:15:32.457 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:32.457 "is_configured": true, 00:15:32.457 "data_offset": 2048, 00:15:32.457 "data_size": 63488 00:15:32.457 } 00:15:32.457 ] 00:15:32.457 }' 00:15:32.457 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.457 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.025 "name": "raid_bdev1", 00:15:33.025 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:33.025 "strip_size_kb": 64, 00:15:33.025 "state": "online", 00:15:33.025 "raid_level": "raid5f", 00:15:33.025 "superblock": true, 00:15:33.025 "num_base_bdevs": 3, 00:15:33.025 
"num_base_bdevs_discovered": 2, 00:15:33.025 "num_base_bdevs_operational": 2, 00:15:33.025 "base_bdevs_list": [ 00:15:33.025 { 00:15:33.025 "name": null, 00:15:33.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.025 "is_configured": false, 00:15:33.025 "data_offset": 0, 00:15:33.025 "data_size": 63488 00:15:33.025 }, 00:15:33.025 { 00:15:33.025 "name": "BaseBdev2", 00:15:33.025 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:33.025 "is_configured": true, 00:15:33.025 "data_offset": 2048, 00:15:33.025 "data_size": 63488 00:15:33.025 }, 00:15:33.025 { 00:15:33.025 "name": "BaseBdev3", 00:15:33.025 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:33.025 "is_configured": true, 00:15:33.025 "data_offset": 2048, 00:15:33.025 "data_size": 63488 00:15:33.025 } 00:15:33.025 ] 00:15:33.025 }' 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.025 [2024-09-29 21:46:51.918864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.025 [2024-09-29 21:46:51.932914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:33.025 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.025 21:46:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:33.025 [2024-09-29 21:46:51.939815] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:33.965 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.965 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.965 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.965 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.965 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.965 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.965 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.965 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.965 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.226 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.226 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.226 "name": "raid_bdev1", 00:15:34.226 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:34.226 "strip_size_kb": 64, 00:15:34.226 "state": "online", 00:15:34.226 "raid_level": "raid5f", 00:15:34.226 "superblock": true, 00:15:34.226 "num_base_bdevs": 3, 00:15:34.226 "num_base_bdevs_discovered": 3, 00:15:34.226 "num_base_bdevs_operational": 3, 00:15:34.226 "process": { 00:15:34.226 "type": "rebuild", 00:15:34.226 "target": "spare", 00:15:34.226 "progress": { 00:15:34.226 "blocks": 20480, 00:15:34.226 "percent": 16 00:15:34.226 } 
00:15:34.226 }, 00:15:34.226 "base_bdevs_list": [ 00:15:34.226 { 00:15:34.226 "name": "spare", 00:15:34.226 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:34.226 "is_configured": true, 00:15:34.226 "data_offset": 2048, 00:15:34.226 "data_size": 63488 00:15:34.226 }, 00:15:34.226 { 00:15:34.226 "name": "BaseBdev2", 00:15:34.226 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:34.226 "is_configured": true, 00:15:34.226 "data_offset": 2048, 00:15:34.226 "data_size": 63488 00:15:34.226 }, 00:15:34.226 { 00:15:34.226 "name": "BaseBdev3", 00:15:34.226 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:34.226 "is_configured": true, 00:15:34.226 "data_offset": 2048, 00:15:34.226 "data_size": 63488 00:15:34.226 } 00:15:34.226 ] 00:15:34.226 }' 00:15:34.226 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:34.226 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=569 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.226 21:46:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.226 "name": "raid_bdev1", 00:15:34.226 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:34.226 "strip_size_kb": 64, 00:15:34.226 "state": "online", 00:15:34.226 "raid_level": "raid5f", 00:15:34.226 "superblock": true, 00:15:34.226 "num_base_bdevs": 3, 00:15:34.226 "num_base_bdevs_discovered": 3, 00:15:34.226 "num_base_bdevs_operational": 3, 00:15:34.226 "process": { 00:15:34.226 "type": "rebuild", 00:15:34.226 "target": "spare", 00:15:34.226 "progress": { 00:15:34.226 "blocks": 22528, 00:15:34.226 "percent": 17 00:15:34.226 } 00:15:34.226 }, 00:15:34.226 "base_bdevs_list": [ 00:15:34.226 { 00:15:34.226 "name": "spare", 00:15:34.226 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:34.226 "is_configured": true, 00:15:34.226 "data_offset": 2048, 00:15:34.226 
"data_size": 63488 00:15:34.226 }, 00:15:34.226 { 00:15:34.226 "name": "BaseBdev2", 00:15:34.226 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:34.226 "is_configured": true, 00:15:34.226 "data_offset": 2048, 00:15:34.226 "data_size": 63488 00:15:34.226 }, 00:15:34.226 { 00:15:34.226 "name": "BaseBdev3", 00:15:34.226 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:34.226 "is_configured": true, 00:15:34.226 "data_offset": 2048, 00:15:34.226 "data_size": 63488 00:15:34.226 } 00:15:34.226 ] 00:15:34.226 }' 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.226 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.486 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.486 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.425 
21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.425 "name": "raid_bdev1", 00:15:35.425 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:35.425 "strip_size_kb": 64, 00:15:35.425 "state": "online", 00:15:35.425 "raid_level": "raid5f", 00:15:35.425 "superblock": true, 00:15:35.425 "num_base_bdevs": 3, 00:15:35.425 "num_base_bdevs_discovered": 3, 00:15:35.425 "num_base_bdevs_operational": 3, 00:15:35.425 "process": { 00:15:35.425 "type": "rebuild", 00:15:35.425 "target": "spare", 00:15:35.425 "progress": { 00:15:35.425 "blocks": 47104, 00:15:35.425 "percent": 37 00:15:35.425 } 00:15:35.425 }, 00:15:35.425 "base_bdevs_list": [ 00:15:35.425 { 00:15:35.425 "name": "spare", 00:15:35.425 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:35.425 "is_configured": true, 00:15:35.425 "data_offset": 2048, 00:15:35.425 "data_size": 63488 00:15:35.425 }, 00:15:35.425 { 00:15:35.425 "name": "BaseBdev2", 00:15:35.425 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:35.425 "is_configured": true, 00:15:35.425 "data_offset": 2048, 00:15:35.425 "data_size": 63488 00:15:35.425 }, 00:15:35.425 { 00:15:35.425 "name": "BaseBdev3", 00:15:35.425 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:35.425 "is_configured": true, 00:15:35.425 "data_offset": 2048, 00:15:35.425 "data_size": 63488 00:15:35.425 } 00:15:35.425 ] 00:15:35.425 }' 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.425 21:46:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.425 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.805 "name": "raid_bdev1", 00:15:36.805 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:36.805 "strip_size_kb": 64, 00:15:36.805 "state": "online", 00:15:36.805 "raid_level": "raid5f", 00:15:36.805 "superblock": true, 00:15:36.805 "num_base_bdevs": 3, 00:15:36.805 "num_base_bdevs_discovered": 3, 00:15:36.805 "num_base_bdevs_operational": 
3, 00:15:36.805 "process": { 00:15:36.805 "type": "rebuild", 00:15:36.805 "target": "spare", 00:15:36.805 "progress": { 00:15:36.805 "blocks": 69632, 00:15:36.805 "percent": 54 00:15:36.805 } 00:15:36.805 }, 00:15:36.805 "base_bdevs_list": [ 00:15:36.805 { 00:15:36.805 "name": "spare", 00:15:36.805 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:36.805 "is_configured": true, 00:15:36.805 "data_offset": 2048, 00:15:36.805 "data_size": 63488 00:15:36.805 }, 00:15:36.805 { 00:15:36.805 "name": "BaseBdev2", 00:15:36.805 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:36.805 "is_configured": true, 00:15:36.805 "data_offset": 2048, 00:15:36.805 "data_size": 63488 00:15:36.805 }, 00:15:36.805 { 00:15:36.805 "name": "BaseBdev3", 00:15:36.805 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:36.805 "is_configured": true, 00:15:36.805 "data_offset": 2048, 00:15:36.805 "data_size": 63488 00:15:36.805 } 00:15:36.805 ] 00:15:36.805 }' 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:36.805 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.745 
21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.745 "name": "raid_bdev1", 00:15:37.745 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:37.745 "strip_size_kb": 64, 00:15:37.745 "state": "online", 00:15:37.745 "raid_level": "raid5f", 00:15:37.745 "superblock": true, 00:15:37.745 "num_base_bdevs": 3, 00:15:37.745 "num_base_bdevs_discovered": 3, 00:15:37.745 "num_base_bdevs_operational": 3, 00:15:37.745 "process": { 00:15:37.745 "type": "rebuild", 00:15:37.745 "target": "spare", 00:15:37.745 "progress": { 00:15:37.745 "blocks": 94208, 00:15:37.745 "percent": 74 00:15:37.745 } 00:15:37.745 }, 00:15:37.745 "base_bdevs_list": [ 00:15:37.745 { 00:15:37.745 "name": "spare", 00:15:37.745 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:37.745 "is_configured": true, 00:15:37.745 "data_offset": 2048, 00:15:37.745 "data_size": 63488 00:15:37.745 }, 00:15:37.745 { 00:15:37.745 "name": "BaseBdev2", 00:15:37.745 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:37.745 "is_configured": true, 00:15:37.745 "data_offset": 2048, 00:15:37.745 "data_size": 63488 00:15:37.745 }, 00:15:37.745 { 00:15:37.745 "name": "BaseBdev3", 00:15:37.745 "uuid": 
"719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:37.745 "is_configured": true, 00:15:37.745 "data_offset": 2048, 00:15:37.745 "data_size": 63488 00:15:37.745 } 00:15:37.745 ] 00:15:37.745 }' 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.745 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.126 
21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.126 "name": "raid_bdev1", 00:15:39.126 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:39.126 "strip_size_kb": 64, 00:15:39.126 "state": "online", 00:15:39.126 "raid_level": "raid5f", 00:15:39.126 "superblock": true, 00:15:39.126 "num_base_bdevs": 3, 00:15:39.126 "num_base_bdevs_discovered": 3, 00:15:39.126 "num_base_bdevs_operational": 3, 00:15:39.126 "process": { 00:15:39.126 "type": "rebuild", 00:15:39.126 "target": "spare", 00:15:39.126 "progress": { 00:15:39.126 "blocks": 116736, 00:15:39.126 "percent": 91 00:15:39.126 } 00:15:39.126 }, 00:15:39.126 "base_bdevs_list": [ 00:15:39.126 { 00:15:39.126 "name": "spare", 00:15:39.126 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:39.126 "is_configured": true, 00:15:39.126 "data_offset": 2048, 00:15:39.126 "data_size": 63488 00:15:39.126 }, 00:15:39.126 { 00:15:39.126 "name": "BaseBdev2", 00:15:39.126 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:39.126 "is_configured": true, 00:15:39.126 "data_offset": 2048, 00:15:39.126 "data_size": 63488 00:15:39.126 }, 00:15:39.126 { 00:15:39.126 "name": "BaseBdev3", 00:15:39.126 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:39.126 "is_configured": true, 00:15:39.126 "data_offset": 2048, 00:15:39.126 "data_size": 63488 00:15:39.126 } 00:15:39.126 ] 00:15:39.126 }' 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.126 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:39.386 [2024-09-29 21:46:58.172836] 
bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:39.386 [2024-09-29 21:46:58.172909] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:39.386 [2024-09-29 21:46:58.173003] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.955 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:39.955 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.955 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.955 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.955 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.955 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.955 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.955 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.955 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.955 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.955 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.955 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.955 "name": "raid_bdev1", 00:15:39.955 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:39.955 "strip_size_kb": 64, 00:15:39.955 "state": "online", 00:15:39.955 "raid_level": "raid5f", 00:15:39.955 "superblock": true, 00:15:39.955 "num_base_bdevs": 3, 00:15:39.955 "num_base_bdevs_discovered": 3, 
00:15:39.955 "num_base_bdevs_operational": 3, 00:15:39.955 "base_bdevs_list": [ 00:15:39.955 { 00:15:39.955 "name": "spare", 00:15:39.955 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:39.955 "is_configured": true, 00:15:39.955 "data_offset": 2048, 00:15:39.955 "data_size": 63488 00:15:39.955 }, 00:15:39.955 { 00:15:39.955 "name": "BaseBdev2", 00:15:39.955 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:39.955 "is_configured": true, 00:15:39.955 "data_offset": 2048, 00:15:39.955 "data_size": 63488 00:15:39.955 }, 00:15:39.955 { 00:15:39.955 "name": "BaseBdev3", 00:15:39.955 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:39.955 "is_configured": true, 00:15:39.955 "data_offset": 2048, 00:15:39.955 "data_size": 63488 00:15:39.955 } 00:15:39.955 ] 00:15:39.955 }' 00:15:39.955 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.220 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:40.220 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.220 "name": "raid_bdev1", 00:15:40.220 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:40.220 "strip_size_kb": 64, 00:15:40.220 "state": "online", 00:15:40.220 "raid_level": "raid5f", 00:15:40.220 "superblock": true, 00:15:40.220 "num_base_bdevs": 3, 00:15:40.220 "num_base_bdevs_discovered": 3, 00:15:40.220 "num_base_bdevs_operational": 3, 00:15:40.220 "base_bdevs_list": [ 00:15:40.220 { 00:15:40.220 "name": "spare", 00:15:40.220 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:40.220 "is_configured": true, 00:15:40.220 "data_offset": 2048, 00:15:40.220 "data_size": 63488 00:15:40.220 }, 00:15:40.220 { 00:15:40.220 "name": "BaseBdev2", 00:15:40.220 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:40.220 "is_configured": true, 00:15:40.220 "data_offset": 2048, 00:15:40.220 "data_size": 63488 00:15:40.220 }, 00:15:40.220 { 00:15:40.220 "name": "BaseBdev3", 00:15:40.220 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:40.220 "is_configured": true, 00:15:40.220 "data_offset": 2048, 00:15:40.220 "data_size": 63488 00:15:40.220 } 00:15:40.220 ] 00:15:40.220 }' 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.220 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.481 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.481 "name": "raid_bdev1", 00:15:40.481 "uuid": 
"3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:40.481 "strip_size_kb": 64, 00:15:40.481 "state": "online", 00:15:40.481 "raid_level": "raid5f", 00:15:40.481 "superblock": true, 00:15:40.481 "num_base_bdevs": 3, 00:15:40.481 "num_base_bdevs_discovered": 3, 00:15:40.481 "num_base_bdevs_operational": 3, 00:15:40.481 "base_bdevs_list": [ 00:15:40.481 { 00:15:40.481 "name": "spare", 00:15:40.481 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:40.481 "is_configured": true, 00:15:40.481 "data_offset": 2048, 00:15:40.481 "data_size": 63488 00:15:40.481 }, 00:15:40.481 { 00:15:40.481 "name": "BaseBdev2", 00:15:40.481 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:40.481 "is_configured": true, 00:15:40.481 "data_offset": 2048, 00:15:40.481 "data_size": 63488 00:15:40.481 }, 00:15:40.481 { 00:15:40.481 "name": "BaseBdev3", 00:15:40.481 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:40.481 "is_configured": true, 00:15:40.481 "data_offset": 2048, 00:15:40.481 "data_size": 63488 00:15:40.481 } 00:15:40.481 ] 00:15:40.481 }' 00:15:40.481 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.481 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.740 [2024-09-29 21:46:59.669266] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.740 [2024-09-29 21:46:59.669295] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.740 [2024-09-29 21:46:59.669370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.740 [2024-09-29 21:46:59.669449] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.740 [2024-09-29 21:46:59.669468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:40.740 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:41.000 /dev/nbd0 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.000 1+0 records in 00:15:41.000 1+0 records out 00:15:41.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403951 s, 10.1 MB/s 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.000 21:46:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:41.000 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:41.260 /dev/nbd1 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:41.260 1+0 records in 00:15:41.260 1+0 records out 00:15:41.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414966 s, 9.9 MB/s 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:41.260 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:41.519 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:41.519 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:41.519 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:41.519 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:41.519 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:41.519 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.519 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:15:41.778 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.778 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.778 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.778 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.778 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.778 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.778 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:41.778 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.778 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.778 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:41.778 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:42.038 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:42.038 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:42.038 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.038 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.039 [2024-09-29 21:47:00.787563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:42.039 [2024-09-29 21:47:00.787615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.039 [2024-09-29 21:47:00.787634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:42.039 [2024-09-29 21:47:00.787644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.039 [2024-09-29 21:47:00.790016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.039 [2024-09-29 21:47:00.790069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:42.039 [2024-09-29 21:47:00.790148] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:42.039 [2024-09-29 21:47:00.790216] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.039 [2024-09-29 21:47:00.790350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:42.039 [2024-09-29 21:47:00.790445] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:42.039 spare 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.039 [2024-09-29 21:47:00.890332] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:42.039 [2024-09-29 21:47:00.890362] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:42.039 [2024-09-29 21:47:00.890606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:42.039 [2024-09-29 21:47:00.895793] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:42.039 [2024-09-29 21:47:00.895816] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:42.039 [2024-09-29 21:47:00.895976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.039 "name": "raid_bdev1", 00:15:42.039 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:42.039 "strip_size_kb": 64, 00:15:42.039 "state": "online", 00:15:42.039 "raid_level": "raid5f", 00:15:42.039 "superblock": true, 00:15:42.039 "num_base_bdevs": 3, 00:15:42.039 "num_base_bdevs_discovered": 3, 00:15:42.039 "num_base_bdevs_operational": 3, 00:15:42.039 "base_bdevs_list": [ 00:15:42.039 { 00:15:42.039 "name": "spare", 00:15:42.039 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:42.039 "is_configured": true, 00:15:42.039 "data_offset": 2048, 00:15:42.039 "data_size": 63488 00:15:42.039 }, 00:15:42.039 { 00:15:42.039 "name": "BaseBdev2", 00:15:42.039 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:42.039 "is_configured": true, 00:15:42.039 "data_offset": 
2048, 00:15:42.039 "data_size": 63488 00:15:42.039 }, 00:15:42.039 { 00:15:42.039 "name": "BaseBdev3", 00:15:42.039 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:42.039 "is_configured": true, 00:15:42.039 "data_offset": 2048, 00:15:42.039 "data_size": 63488 00:15:42.039 } 00:15:42.039 ] 00:15:42.039 }' 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.039 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.609 "name": "raid_bdev1", 00:15:42.609 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:42.609 "strip_size_kb": 64, 00:15:42.609 "state": "online", 00:15:42.609 "raid_level": "raid5f", 00:15:42.609 "superblock": true, 00:15:42.609 
"num_base_bdevs": 3, 00:15:42.609 "num_base_bdevs_discovered": 3, 00:15:42.609 "num_base_bdevs_operational": 3, 00:15:42.609 "base_bdevs_list": [ 00:15:42.609 { 00:15:42.609 "name": "spare", 00:15:42.609 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:42.609 "is_configured": true, 00:15:42.609 "data_offset": 2048, 00:15:42.609 "data_size": 63488 00:15:42.609 }, 00:15:42.609 { 00:15:42.609 "name": "BaseBdev2", 00:15:42.609 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:42.609 "is_configured": true, 00:15:42.609 "data_offset": 2048, 00:15:42.609 "data_size": 63488 00:15:42.609 }, 00:15:42.609 { 00:15:42.609 "name": "BaseBdev3", 00:15:42.609 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:42.609 "is_configured": true, 00:15:42.609 "data_offset": 2048, 00:15:42.609 "data_size": 63488 00:15:42.609 } 00:15:42.609 ] 00:15:42.609 }' 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.609 21:47:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.609 [2024-09-29 21:47:01.484947] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.609 "name": "raid_bdev1", 00:15:42.609 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:42.609 "strip_size_kb": 64, 00:15:42.609 "state": "online", 00:15:42.609 "raid_level": "raid5f", 00:15:42.609 "superblock": true, 00:15:42.609 "num_base_bdevs": 3, 00:15:42.609 "num_base_bdevs_discovered": 2, 00:15:42.609 "num_base_bdevs_operational": 2, 00:15:42.609 "base_bdevs_list": [ 00:15:42.609 { 00:15:42.609 "name": null, 00:15:42.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.609 "is_configured": false, 00:15:42.609 "data_offset": 0, 00:15:42.609 "data_size": 63488 00:15:42.609 }, 00:15:42.609 { 00:15:42.609 "name": "BaseBdev2", 00:15:42.609 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:42.609 "is_configured": true, 00:15:42.609 "data_offset": 2048, 00:15:42.609 "data_size": 63488 00:15:42.609 }, 00:15:42.609 { 00:15:42.609 "name": "BaseBdev3", 00:15:42.609 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:42.609 "is_configured": true, 00:15:42.609 "data_offset": 2048, 00:15:42.609 "data_size": 63488 00:15:42.609 } 00:15:42.609 ] 00:15:42.609 }' 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.609 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.178 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:43.178 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.178 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.178 [2024-09-29 21:47:01.968168] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.178 [2024-09-29 21:47:01.968333] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:43.178 [2024-09-29 21:47:01.968357] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:43.178 [2024-09-29 21:47:01.968386] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.179 [2024-09-29 21:47:01.982711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:43.179 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.179 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:43.179 [2024-09-29 21:47:01.989739] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:44.117 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.117 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.117 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.117 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.117 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.117 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.117 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.117 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.117 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:44.117 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.117 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.117 "name": "raid_bdev1", 00:15:44.117 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:44.117 "strip_size_kb": 64, 00:15:44.117 "state": "online", 00:15:44.117 "raid_level": "raid5f", 00:15:44.117 "superblock": true, 00:15:44.117 "num_base_bdevs": 3, 00:15:44.117 "num_base_bdevs_discovered": 3, 00:15:44.117 "num_base_bdevs_operational": 3, 00:15:44.117 "process": { 00:15:44.117 "type": "rebuild", 00:15:44.117 "target": "spare", 00:15:44.117 "progress": { 00:15:44.117 "blocks": 20480, 00:15:44.117 "percent": 16 00:15:44.117 } 00:15:44.117 }, 00:15:44.117 "base_bdevs_list": [ 00:15:44.117 { 00:15:44.117 "name": "spare", 00:15:44.117 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:44.117 "is_configured": true, 00:15:44.117 "data_offset": 2048, 00:15:44.117 "data_size": 63488 00:15:44.117 }, 00:15:44.117 { 00:15:44.117 "name": "BaseBdev2", 00:15:44.117 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:44.117 "is_configured": true, 00:15:44.117 "data_offset": 2048, 00:15:44.117 "data_size": 63488 00:15:44.117 }, 00:15:44.117 { 00:15:44.117 "name": "BaseBdev3", 00:15:44.117 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:44.117 "is_configured": true, 00:15:44.117 "data_offset": 2048, 00:15:44.117 "data_size": 63488 00:15:44.117 } 00:15:44.117 ] 00:15:44.117 }' 00:15:44.117 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.117 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.117 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.377 [2024-09-29 21:47:03.124803] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.377 [2024-09-29 21:47:03.196822] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:44.377 [2024-09-29 21:47:03.196877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.377 [2024-09-29 21:47:03.196891] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:44.377 [2024-09-29 21:47:03.196900] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.377 21:47:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.377 "name": "raid_bdev1", 00:15:44.377 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:44.377 "strip_size_kb": 64, 00:15:44.377 "state": "online", 00:15:44.377 "raid_level": "raid5f", 00:15:44.377 "superblock": true, 00:15:44.377 "num_base_bdevs": 3, 00:15:44.377 "num_base_bdevs_discovered": 2, 00:15:44.377 "num_base_bdevs_operational": 2, 00:15:44.377 "base_bdevs_list": [ 00:15:44.377 { 00:15:44.377 "name": null, 00:15:44.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.377 "is_configured": false, 00:15:44.377 "data_offset": 0, 00:15:44.377 "data_size": 63488 00:15:44.377 }, 00:15:44.377 { 00:15:44.377 "name": "BaseBdev2", 00:15:44.377 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:44.377 "is_configured": true, 00:15:44.377 "data_offset": 2048, 00:15:44.377 "data_size": 63488 00:15:44.377 }, 00:15:44.377 { 00:15:44.377 "name": "BaseBdev3", 00:15:44.377 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:44.377 "is_configured": true, 00:15:44.377 "data_offset": 2048, 00:15:44.377 "data_size": 63488 00:15:44.377 } 00:15:44.377 ] 00:15:44.377 }' 00:15:44.377 21:47:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.377 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.947 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:44.947 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.947 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.947 [2024-09-29 21:47:03.694630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:44.947 [2024-09-29 21:47:03.694682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.947 [2024-09-29 21:47:03.694700] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:44.947 [2024-09-29 21:47:03.694712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.947 [2024-09-29 21:47:03.695162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.947 [2024-09-29 21:47:03.695185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:44.947 [2024-09-29 21:47:03.695261] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:44.947 [2024-09-29 21:47:03.695276] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:44.947 [2024-09-29 21:47:03.695285] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:44.947 [2024-09-29 21:47:03.695304] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:44.947 [2024-09-29 21:47:03.709345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:44.947 spare 00:15:44.947 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.947 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:44.947 [2024-09-29 21:47:03.716308] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:45.887 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.887 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.887 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.887 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.887 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.887 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.887 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.887 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.887 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.887 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.887 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.887 "name": "raid_bdev1", 00:15:45.887 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:45.887 "strip_size_kb": 64, 00:15:45.887 "state": 
"online", 00:15:45.887 "raid_level": "raid5f", 00:15:45.887 "superblock": true, 00:15:45.887 "num_base_bdevs": 3, 00:15:45.887 "num_base_bdevs_discovered": 3, 00:15:45.887 "num_base_bdevs_operational": 3, 00:15:45.887 "process": { 00:15:45.887 "type": "rebuild", 00:15:45.887 "target": "spare", 00:15:45.887 "progress": { 00:15:45.887 "blocks": 20480, 00:15:45.887 "percent": 16 00:15:45.887 } 00:15:45.887 }, 00:15:45.887 "base_bdevs_list": [ 00:15:45.887 { 00:15:45.887 "name": "spare", 00:15:45.887 "uuid": "8492fc5b-2bfd-5b3a-b19f-171a8f341e5e", 00:15:45.887 "is_configured": true, 00:15:45.887 "data_offset": 2048, 00:15:45.887 "data_size": 63488 00:15:45.887 }, 00:15:45.887 { 00:15:45.887 "name": "BaseBdev2", 00:15:45.888 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:45.888 "is_configured": true, 00:15:45.888 "data_offset": 2048, 00:15:45.888 "data_size": 63488 00:15:45.888 }, 00:15:45.888 { 00:15:45.888 "name": "BaseBdev3", 00:15:45.888 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:45.888 "is_configured": true, 00:15:45.888 "data_offset": 2048, 00:15:45.888 "data_size": 63488 00:15:45.888 } 00:15:45.888 ] 00:15:45.888 }' 00:15:45.888 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.888 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.888 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.888 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.888 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:45.888 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.888 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.148 [2024-09-29 21:47:04.871236] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.148 [2024-09-29 21:47:04.923306] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:46.148 [2024-09-29 21:47:04.923352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.148 [2024-09-29 21:47:04.923368] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.148 [2024-09-29 21:47:04.923375] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.148 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.148 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.148 "name": "raid_bdev1", 00:15:46.148 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:46.148 "strip_size_kb": 64, 00:15:46.148 "state": "online", 00:15:46.148 "raid_level": "raid5f", 00:15:46.148 "superblock": true, 00:15:46.148 "num_base_bdevs": 3, 00:15:46.148 "num_base_bdevs_discovered": 2, 00:15:46.148 "num_base_bdevs_operational": 2, 00:15:46.148 "base_bdevs_list": [ 00:15:46.148 { 00:15:46.148 "name": null, 00:15:46.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.148 "is_configured": false, 00:15:46.148 "data_offset": 0, 00:15:46.148 "data_size": 63488 00:15:46.148 }, 00:15:46.148 { 00:15:46.148 "name": "BaseBdev2", 00:15:46.148 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:46.148 "is_configured": true, 00:15:46.148 "data_offset": 2048, 00:15:46.148 "data_size": 63488 00:15:46.148 }, 00:15:46.148 { 00:15:46.148 "name": "BaseBdev3", 00:15:46.148 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:46.148 "is_configured": true, 00:15:46.148 "data_offset": 2048, 00:15:46.148 "data_size": 63488 00:15:46.148 } 00:15:46.148 ] 00:15:46.148 }' 00:15:46.148 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.148 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.718 "name": "raid_bdev1", 00:15:46.718 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:46.718 "strip_size_kb": 64, 00:15:46.718 "state": "online", 00:15:46.718 "raid_level": "raid5f", 00:15:46.718 "superblock": true, 00:15:46.718 "num_base_bdevs": 3, 00:15:46.718 "num_base_bdevs_discovered": 2, 00:15:46.718 "num_base_bdevs_operational": 2, 00:15:46.718 "base_bdevs_list": [ 00:15:46.718 { 00:15:46.718 "name": null, 00:15:46.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.718 "is_configured": false, 00:15:46.718 "data_offset": 0, 00:15:46.718 "data_size": 63488 00:15:46.718 }, 00:15:46.718 { 00:15:46.718 "name": "BaseBdev2", 00:15:46.718 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:46.718 "is_configured": true, 00:15:46.718 "data_offset": 2048, 00:15:46.718 "data_size": 63488 00:15:46.718 }, 00:15:46.718 { 00:15:46.718 "name": "BaseBdev3", 00:15:46.718 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:46.718 "is_configured": true, 
00:15:46.718 "data_offset": 2048, 00:15:46.718 "data_size": 63488 00:15:46.718 } 00:15:46.718 ] 00:15:46.718 }' 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.718 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.718 [2024-09-29 21:47:05.545120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:46.718 [2024-09-29 21:47:05.545168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.718 [2024-09-29 21:47:05.545190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:46.718 [2024-09-29 21:47:05.545198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.718 [2024-09-29 21:47:05.545611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.718 [2024-09-29 
21:47:05.545628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:46.718 [2024-09-29 21:47:05.545696] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:46.718 [2024-09-29 21:47:05.545709] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:46.718 [2024-09-29 21:47:05.545720] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:46.718 [2024-09-29 21:47:05.545732] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:46.719 BaseBdev1 00:15:46.719 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.719 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.658 21:47:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.658 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.658 "name": "raid_bdev1", 00:15:47.658 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5", 00:15:47.658 "strip_size_kb": 64, 00:15:47.659 "state": "online", 00:15:47.659 "raid_level": "raid5f", 00:15:47.659 "superblock": true, 00:15:47.659 "num_base_bdevs": 3, 00:15:47.659 "num_base_bdevs_discovered": 2, 00:15:47.659 "num_base_bdevs_operational": 2, 00:15:47.659 "base_bdevs_list": [ 00:15:47.659 { 00:15:47.659 "name": null, 00:15:47.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.659 "is_configured": false, 00:15:47.659 "data_offset": 0, 00:15:47.659 "data_size": 63488 00:15:47.659 }, 00:15:47.659 { 00:15:47.659 "name": "BaseBdev2", 00:15:47.659 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9", 00:15:47.659 "is_configured": true, 00:15:47.659 "data_offset": 2048, 00:15:47.659 "data_size": 63488 00:15:47.659 }, 00:15:47.659 { 00:15:47.659 "name": "BaseBdev3", 00:15:47.659 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352", 00:15:47.659 "is_configured": true, 00:15:47.659 "data_offset": 2048, 00:15:47.659 "data_size": 63488 00:15:47.659 } 00:15:47.659 ] 00:15:47.659 }' 00:15:47.659 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.659 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x
00:15:48.228 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:48.228 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:48.228 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:48.228 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:48.228 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:48.228 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:48.228 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:48.228 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:48.228 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:48.228 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:48.228 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:48.228 "name": "raid_bdev1",
00:15:48.228 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5",
00:15:48.228 "strip_size_kb": 64,
00:15:48.228 "state": "online",
00:15:48.228 "raid_level": "raid5f",
00:15:48.228 "superblock": true,
00:15:48.228 "num_base_bdevs": 3,
00:15:48.228 "num_base_bdevs_discovered": 2,
00:15:48.229 "num_base_bdevs_operational": 2,
00:15:48.229 "base_bdevs_list": [
00:15:48.229 {
00:15:48.229 "name": null,
00:15:48.229 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:48.229 "is_configured": false,
00:15:48.229 "data_offset": 0,
00:15:48.229 "data_size": 63488
00:15:48.229 },
00:15:48.229 {
00:15:48.229 "name": "BaseBdev2",
00:15:48.229 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9",
00:15:48.229 "is_configured": true,
00:15:48.229 "data_offset": 2048,
00:15:48.229 "data_size": 63488
00:15:48.229 },
00:15:48.229 {
00:15:48.229 "name": "BaseBdev3",
00:15:48.229 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352",
00:15:48.229 "is_configured": true,
00:15:48.229 "data_offset": 2048,
00:15:48.229 "data_size": 63488
00:15:48.229 }
00:15:48.229 ]
00:15:48.229 }'
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:48.229 [2024-09-29 21:47:07.146393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:48.229 [2024-09-29 21:47:07.146529] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:15:48.229 [2024-09-29 21:47:07.146543] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:15:48.229 request:
00:15:48.229 {
00:15:48.229 "base_bdev": "BaseBdev1",
00:15:48.229 "raid_bdev": "raid_bdev1",
00:15:48.229 "method": "bdev_raid_add_base_bdev",
00:15:48.229 "req_id": 1
00:15:48.229 }
00:15:48.229 Got JSON-RPC error response
00:15:48.229 response:
00:15:48.229 {
00:15:48.229 "code": -22,
00:15:48.229 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:15:48.229 }
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:15:48.229 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:49.612 "name": "raid_bdev1",
00:15:49.612 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5",
00:15:49.612 "strip_size_kb": 64,
00:15:49.612 "state": "online",
00:15:49.612 "raid_level": "raid5f",
00:15:49.612 "superblock": true,
00:15:49.612 "num_base_bdevs": 3,
00:15:49.612 "num_base_bdevs_discovered": 2,
00:15:49.612 "num_base_bdevs_operational": 2,
00:15:49.612 "base_bdevs_list": [
00:15:49.612 {
00:15:49.612 "name": null,
00:15:49.612 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:49.612 "is_configured": false,
00:15:49.612 "data_offset": 0,
00:15:49.612 "data_size": 63488
00:15:49.612 },
00:15:49.612 {
00:15:49.612 "name": "BaseBdev2",
00:15:49.612 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9",
00:15:49.612 "is_configured": true,
00:15:49.612 "data_offset": 2048,
00:15:49.612 "data_size": 63488
00:15:49.612 },
00:15:49.612 {
00:15:49.612 "name": "BaseBdev3",
00:15:49.612 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352",
00:15:49.612 "is_configured": true,
00:15:49.612 "data_offset": 2048,
00:15:49.612 "data_size": 63488
00:15:49.612 }
00:15:49.612 ]
00:15:49.612 }'
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:49.612 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:49.872 "name": "raid_bdev1",
00:15:49.872 "uuid": "3a5e6dab-d0cd-4180-8ce3-7fbf31c1afc5",
00:15:49.872 "strip_size_kb": 64,
00:15:49.872 "state": "online",
00:15:49.872 "raid_level": "raid5f",
00:15:49.872 "superblock": true,
00:15:49.872 "num_base_bdevs": 3,
00:15:49.872 "num_base_bdevs_discovered": 2,
00:15:49.872 "num_base_bdevs_operational": 2,
00:15:49.872 "base_bdevs_list": [
00:15:49.872 {
00:15:49.872 "name": null,
00:15:49.872 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:49.872 "is_configured": false,
00:15:49.872 "data_offset": 0,
00:15:49.872 "data_size": 63488
00:15:49.872 },
00:15:49.872 {
00:15:49.872 "name": "BaseBdev2",
00:15:49.872 "uuid": "81bbc5c3-3486-5da7-838e-64a428c5a2c9",
00:15:49.872 "is_configured": true,
00:15:49.872 "data_offset": 2048,
00:15:49.872 "data_size": 63488
00:15:49.872 },
00:15:49.872 {
00:15:49.872 "name": "BaseBdev3",
00:15:49.872 "uuid": "719983d5-f79d-5295-adc2-5a385c2fe352",
00:15:49.872 "is_configured": true,
00:15:49.872 "data_offset": 2048,
00:15:49.872 "data_size": 63488
00:15:49.872 }
00:15:49.872 ]
00:15:49.872 }'
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82057
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82057 ']'
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 82057
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82057
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:49.872 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 82057
21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82057'
21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 82057
Received shutdown signal, test time was about 60.000000 seconds
00:15:49.872 00
00:15:49.872 Latency(us)
00:15:49.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:49.872 ===================================================================================================================
00:15:49.872 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:15:49.872 [2024-09-29 21:47:08.787146] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:15:49.872 [2024-09-29 21:47:08.787257] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 82057
00:15:49.872 [2024-09-29 21:47:08.787319] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:49.872 [2024-09-29 21:47:08.787330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:15:50.448 [2024-09-29 21:47:09.154514] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:15:51.388 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
00:15:51.388
00:15:51.388 real 0m23.500s
00:15:51.388 user 0m29.957s
00:15:51.388 sys 0m3.140s
00:15:51.388 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:51.388 21:47:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:51.388 ************************************
00:15:51.388 END TEST raid5f_rebuild_test_sb
00:15:51.388 ************************************
00:15:51.649 21:47:10 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4}
00:15:51.649 21:47:10 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false
00:15:51.649 21:47:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:15:51.649 21:47:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:51.649 21:47:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:15:51.649 ************************************
00:15:51.649 START TEST raid5f_state_function_test
00:15:51.649 ************************************
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:15:51.649 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:15:51.650 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:15:51.650 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:15:51.650 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:15:51.650 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:15:51.650 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:15:51.650 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:15:51.650 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:15:51.650 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:15:51.650 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:15:51.650 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82818
00:15:51.650 21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
Process raid pid: 82818
21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82818'
21:47:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82818
21:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 82818 ']'
21:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
21:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
21:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
21:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
21:47:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:51.650 [2024-09-29 21:47:10.523063] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:15:51.650 [2024-09-29 21:47:10.523210] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:51.910 [2024-09-29 21:47:10.695200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:52.169 [2024-09-29 21:47:10.896314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:15:52.169 [2024-09-29 21:47:11.106334] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:52.169 [2024-09-29 21:47:11.106372] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.430 [2024-09-29 21:47:11.350619] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:52.430 [2024-09-29 21:47:11.350668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:52.430 [2024-09-29 21:47:11.350677] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:52.430 [2024-09-29 21:47:11.350686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:52.430 [2024-09-29 21:47:11.350691] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:52.430 [2024-09-29 21:47:11.350699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:52.430 [2024-09-29 21:47:11.350704] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:15:52.430 [2024-09-29 21:47:11.350714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:52.430 "name": "Existed_Raid",
00:15:52.430 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.430 "strip_size_kb": 64,
00:15:52.430 "state": "configuring",
00:15:52.430 "raid_level": "raid5f",
00:15:52.430 "superblock": false,
00:15:52.430 "num_base_bdevs": 4,
00:15:52.430 "num_base_bdevs_discovered": 0,
00:15:52.430 "num_base_bdevs_operational": 4,
00:15:52.430 "base_bdevs_list": [
00:15:52.430 {
00:15:52.430 "name": "BaseBdev1",
00:15:52.430 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.430 "is_configured": false,
00:15:52.430 "data_offset": 0,
00:15:52.430 "data_size": 0
00:15:52.430 },
00:15:52.430 {
00:15:52.430 "name": "BaseBdev2",
00:15:52.430 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.430 "is_configured": false,
00:15:52.430 "data_offset": 0,
00:15:52.430 "data_size": 0
00:15:52.430 },
00:15:52.430 {
00:15:52.430 "name": "BaseBdev3",
00:15:52.430 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.430 "is_configured": false,
00:15:52.430 "data_offset": 0,
00:15:52.430 "data_size": 0
00:15:52.430 },
00:15:52.430 {
00:15:52.430 "name": "BaseBdev4",
00:15:52.430 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:52.430 "is_configured": false,
00:15:52.430 "data_offset": 0,
00:15:52.430 "data_size": 0
00:15:52.430 }
00:15:52.430 ]
00:15:52.430 }'
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:52.430 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.001 [2024-09-29 21:47:11.773849] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:53.001 [2024-09-29 21:47:11.773885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.001 [2024-09-29 21:47:11.785852] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:15:53.001 [2024-09-29 21:47:11.785888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:15:53.001 [2024-09-29 21:47:11.785895] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:53.001 [2024-09-29 21:47:11.785903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:53.001 [2024-09-29 21:47:11.785909] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:53.001 [2024-09-29 21:47:11.785916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:53.001 [2024-09-29 21:47:11.785922] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:15:53.001 [2024-09-29 21:47:11.785929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.001 [2024-09-29 21:47:11.864594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:53.001 BaseBdev1
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.001 [
00:15:53.001 {
00:15:53.001 "name": "BaseBdev1",
00:15:53.001 "aliases": [
00:15:53.001 "9e08a03d-2bd0-4d9d-ab90-e0b13d32ddfb"
00:15:53.001 ],
00:15:53.001 "product_name": "Malloc disk",
00:15:53.001 "block_size": 512,
00:15:53.001 "num_blocks": 65536,
00:15:53.001 "uuid": "9e08a03d-2bd0-4d9d-ab90-e0b13d32ddfb",
00:15:53.001 "assigned_rate_limits": {
00:15:53.001 "rw_ios_per_sec": 0,
00:15:53.001 "rw_mbytes_per_sec": 0,
00:15:53.001 "r_mbytes_per_sec": 0,
00:15:53.001 "w_mbytes_per_sec": 0
00:15:53.001 },
00:15:53.001 "claimed": true,
00:15:53.001 "claim_type": "exclusive_write",
00:15:53.001 "zoned": false,
00:15:53.001 "supported_io_types": {
00:15:53.001 "read": true,
00:15:53.001 "write": true,
00:15:53.001 "unmap": true,
00:15:53.001 "flush": true,
00:15:53.001 "reset": true,
00:15:53.001 "nvme_admin": false,
00:15:53.001 "nvme_io": false,
00:15:53.001 "nvme_io_md": false,
00:15:53.001 "write_zeroes": true,
00:15:53.001 "zcopy": true,
00:15:53.001 "get_zone_info": false,
00:15:53.001 "zone_management": false,
00:15:53.001 "zone_append": false,
00:15:53.001 "compare": false,
00:15:53.001 "compare_and_write": false,
00:15:53.001 "abort": true,
00:15:53.001 "seek_hole": false,
00:15:53.001 "seek_data": false,
00:15:53.001 "copy": true,
00:15:53.001 "nvme_iov_md": false
00:15:53.001 },
00:15:53.001 "memory_domains": [
00:15:53.001 {
00:15:53.001 "dma_device_id": "system",
00:15:53.001 "dma_device_type": 1
00:15:53.001 },
00:15:53.001 {
00:15:53.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:53.001 "dma_device_type": 2
00:15:53.001 }
00:15:53.001 ],
00:15:53.001 "driver_specific": {}
00:15:53.001 }
00:15:53.001 ]
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.001 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:53.001 "name": "Existed_Raid",
00:15:53.001 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.001 "strip_size_kb": 64,
00:15:53.001 "state": "configuring",
00:15:53.001 "raid_level": "raid5f",
00:15:53.001 "superblock": false,
00:15:53.001 "num_base_bdevs": 4,
00:15:53.001 "num_base_bdevs_discovered": 1,
00:15:53.001 "num_base_bdevs_operational": 4,
00:15:53.001 "base_bdevs_list": [
00:15:53.001 {
00:15:53.001 "name": "BaseBdev1",
00:15:53.001 "uuid": "9e08a03d-2bd0-4d9d-ab90-e0b13d32ddfb",
00:15:53.001 "is_configured": true,
00:15:53.001 "data_offset": 0,
00:15:53.001 "data_size": 65536
00:15:53.001 },
00:15:53.001 {
00:15:53.001 "name": "BaseBdev2",
00:15:53.001 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.001 "is_configured": false,
00:15:53.001 "data_offset": 0,
00:15:53.001 "data_size": 0
00:15:53.001 },
00:15:53.001 {
00:15:53.001 "name": "BaseBdev3",
00:15:53.001 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.001 "is_configured": false,
00:15:53.001 "data_offset": 0,
00:15:53.002 "data_size": 0
00:15:53.002 },
00:15:53.002 {
00:15:53.002 "name": "BaseBdev4",
00:15:53.002 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.002 "is_configured": false,
00:15:53.002 "data_offset": 0,
00:15:53.002 "data_size": 0
00:15:53.002 }
00:15:53.002 ]
00:15:53.002 }'
00:15:53.002 21:47:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:53.002 21:47:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.571 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:15:53.571 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.571 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.571 [2024-09-29 21:47:12.327876] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:15:53.571 [2024-09-29 21:47:12.327922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:15:53.571 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.571 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:15:53.571 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.571 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.571 [2024-09-29 21:47:12.339905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:53.571 [2024-09-29 21:47:12.341692] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:15:53.571 [2024-09-29 21:47:12.341733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:15:53.571 [2024-09-29 21:47:12.341742] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:15:53.571 [2024-09-29 21:47:12.341752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:15:53.571 [2024-09-29 21:47:12.341758] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:15:53.571 [2024-09-29 21:47:12.341766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:15:53.571 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.571 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:15:53.571 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:15:53.571 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:53.572 "name": "Existed_Raid",
00:15:53.572 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.572 "strip_size_kb": 64,
00:15:53.572 "state": "configuring",
00:15:53.572 "raid_level": "raid5f",
00:15:53.572 "superblock": false,
00:15:53.572 "num_base_bdevs": 4,
00:15:53.572 "num_base_bdevs_discovered": 1,
00:15:53.572 "num_base_bdevs_operational": 4,
00:15:53.572 "base_bdevs_list": [
00:15:53.572 {
00:15:53.572 "name": "BaseBdev1",
00:15:53.572 "uuid": "9e08a03d-2bd0-4d9d-ab90-e0b13d32ddfb",
00:15:53.572 "is_configured": true,
00:15:53.572 "data_offset": 0,
00:15:53.572 "data_size": 65536
00:15:53.572 },
00:15:53.572 {
00:15:53.572 "name": "BaseBdev2",
00:15:53.572 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.572 "is_configured": false,
00:15:53.572 "data_offset": 0,
00:15:53.572 "data_size": 0
00:15:53.572 },
00:15:53.572 {
00:15:53.572 "name": "BaseBdev3",
00:15:53.572 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.572 "is_configured": false,
00:15:53.572 "data_offset": 0,
00:15:53.572 "data_size": 0
00:15:53.572 },
00:15:53.572 {
00:15:53.572 "name": "BaseBdev4",
00:15:53.572 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:53.572 "is_configured": false,
00:15:53.572 "data_offset": 0,
00:15:53.572 "data_size": 0
00:15:53.572 }
00:15:53.572 ]
00:15:53.572 }'
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:53.572 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:15:54.142 [2024-09-29 21:47:12.859365] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:54.142 BaseBdev2
00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.142 [ 00:15:54.142 { 00:15:54.142 "name": "BaseBdev2", 00:15:54.142 "aliases": [ 00:15:54.142 "6dcedb7a-2bcf-48bf-9585-d7f50836664f" 00:15:54.142 ], 00:15:54.142 "product_name": "Malloc disk", 00:15:54.142 "block_size": 512, 00:15:54.142 "num_blocks": 65536, 00:15:54.142 "uuid": "6dcedb7a-2bcf-48bf-9585-d7f50836664f", 00:15:54.142 "assigned_rate_limits": { 00:15:54.142 "rw_ios_per_sec": 0, 00:15:54.142 "rw_mbytes_per_sec": 0, 00:15:54.142 
"r_mbytes_per_sec": 0, 00:15:54.142 "w_mbytes_per_sec": 0 00:15:54.142 }, 00:15:54.142 "claimed": true, 00:15:54.142 "claim_type": "exclusive_write", 00:15:54.142 "zoned": false, 00:15:54.142 "supported_io_types": { 00:15:54.142 "read": true, 00:15:54.142 "write": true, 00:15:54.142 "unmap": true, 00:15:54.142 "flush": true, 00:15:54.142 "reset": true, 00:15:54.142 "nvme_admin": false, 00:15:54.142 "nvme_io": false, 00:15:54.142 "nvme_io_md": false, 00:15:54.142 "write_zeroes": true, 00:15:54.142 "zcopy": true, 00:15:54.142 "get_zone_info": false, 00:15:54.142 "zone_management": false, 00:15:54.142 "zone_append": false, 00:15:54.142 "compare": false, 00:15:54.142 "compare_and_write": false, 00:15:54.142 "abort": true, 00:15:54.142 "seek_hole": false, 00:15:54.142 "seek_data": false, 00:15:54.142 "copy": true, 00:15:54.142 "nvme_iov_md": false 00:15:54.142 }, 00:15:54.142 "memory_domains": [ 00:15:54.142 { 00:15:54.142 "dma_device_id": "system", 00:15:54.142 "dma_device_type": 1 00:15:54.142 }, 00:15:54.142 { 00:15:54.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.142 "dma_device_type": 2 00:15:54.142 } 00:15:54.142 ], 00:15:54.142 "driver_specific": {} 00:15:54.142 } 00:15:54.142 ] 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.142 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.143 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.143 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.143 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.143 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.143 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.143 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.143 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.143 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.143 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.143 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.143 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.143 "name": "Existed_Raid", 00:15:54.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.143 "strip_size_kb": 64, 00:15:54.143 "state": "configuring", 00:15:54.143 "raid_level": "raid5f", 00:15:54.143 "superblock": false, 00:15:54.143 "num_base_bdevs": 4, 00:15:54.143 "num_base_bdevs_discovered": 2, 00:15:54.143 "num_base_bdevs_operational": 4, 00:15:54.143 "base_bdevs_list": [ 00:15:54.143 { 00:15:54.143 "name": "BaseBdev1", 00:15:54.143 "uuid": 
"9e08a03d-2bd0-4d9d-ab90-e0b13d32ddfb", 00:15:54.143 "is_configured": true, 00:15:54.143 "data_offset": 0, 00:15:54.143 "data_size": 65536 00:15:54.143 }, 00:15:54.143 { 00:15:54.143 "name": "BaseBdev2", 00:15:54.143 "uuid": "6dcedb7a-2bcf-48bf-9585-d7f50836664f", 00:15:54.143 "is_configured": true, 00:15:54.143 "data_offset": 0, 00:15:54.143 "data_size": 65536 00:15:54.143 }, 00:15:54.143 { 00:15:54.143 "name": "BaseBdev3", 00:15:54.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.143 "is_configured": false, 00:15:54.143 "data_offset": 0, 00:15:54.143 "data_size": 0 00:15:54.143 }, 00:15:54.143 { 00:15:54.143 "name": "BaseBdev4", 00:15:54.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.143 "is_configured": false, 00:15:54.143 "data_offset": 0, 00:15:54.143 "data_size": 0 00:15:54.143 } 00:15:54.143 ] 00:15:54.143 }' 00:15:54.143 21:47:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.143 21:47:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.411 [2024-09-29 21:47:13.365847] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.411 BaseBdev3 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.411 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.684 [ 00:15:54.684 { 00:15:54.684 "name": "BaseBdev3", 00:15:54.684 "aliases": [ 00:15:54.684 "06080fef-17c4-46a2-bf95-a7f489491b72" 00:15:54.684 ], 00:15:54.684 "product_name": "Malloc disk", 00:15:54.684 "block_size": 512, 00:15:54.684 "num_blocks": 65536, 00:15:54.684 "uuid": "06080fef-17c4-46a2-bf95-a7f489491b72", 00:15:54.684 "assigned_rate_limits": { 00:15:54.684 "rw_ios_per_sec": 0, 00:15:54.684 "rw_mbytes_per_sec": 0, 00:15:54.684 "r_mbytes_per_sec": 0, 00:15:54.684 "w_mbytes_per_sec": 0 00:15:54.684 }, 00:15:54.684 "claimed": true, 00:15:54.684 "claim_type": "exclusive_write", 00:15:54.684 "zoned": false, 00:15:54.684 "supported_io_types": { 00:15:54.684 "read": true, 00:15:54.684 "write": true, 00:15:54.684 "unmap": true, 00:15:54.684 "flush": true, 00:15:54.684 "reset": true, 00:15:54.684 "nvme_admin": false, 
00:15:54.684 "nvme_io": false, 00:15:54.684 "nvme_io_md": false, 00:15:54.684 "write_zeroes": true, 00:15:54.684 "zcopy": true, 00:15:54.684 "get_zone_info": false, 00:15:54.684 "zone_management": false, 00:15:54.684 "zone_append": false, 00:15:54.684 "compare": false, 00:15:54.684 "compare_and_write": false, 00:15:54.684 "abort": true, 00:15:54.684 "seek_hole": false, 00:15:54.684 "seek_data": false, 00:15:54.684 "copy": true, 00:15:54.684 "nvme_iov_md": false 00:15:54.684 }, 00:15:54.684 "memory_domains": [ 00:15:54.684 { 00:15:54.684 "dma_device_id": "system", 00:15:54.684 "dma_device_type": 1 00:15:54.684 }, 00:15:54.684 { 00:15:54.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.684 "dma_device_type": 2 00:15:54.684 } 00:15:54.684 ], 00:15:54.684 "driver_specific": {} 00:15:54.684 } 00:15:54.684 ] 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.684 "name": "Existed_Raid", 00:15:54.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.684 "strip_size_kb": 64, 00:15:54.684 "state": "configuring", 00:15:54.684 "raid_level": "raid5f", 00:15:54.684 "superblock": false, 00:15:54.684 "num_base_bdevs": 4, 00:15:54.684 "num_base_bdevs_discovered": 3, 00:15:54.684 "num_base_bdevs_operational": 4, 00:15:54.684 "base_bdevs_list": [ 00:15:54.684 { 00:15:54.684 "name": "BaseBdev1", 00:15:54.684 "uuid": "9e08a03d-2bd0-4d9d-ab90-e0b13d32ddfb", 00:15:54.684 "is_configured": true, 00:15:54.684 "data_offset": 0, 00:15:54.684 "data_size": 65536 00:15:54.684 }, 00:15:54.684 { 00:15:54.684 "name": "BaseBdev2", 00:15:54.684 "uuid": "6dcedb7a-2bcf-48bf-9585-d7f50836664f", 00:15:54.684 "is_configured": true, 00:15:54.684 "data_offset": 0, 00:15:54.684 "data_size": 65536 00:15:54.684 }, 00:15:54.684 { 
00:15:54.684 "name": "BaseBdev3", 00:15:54.684 "uuid": "06080fef-17c4-46a2-bf95-a7f489491b72", 00:15:54.684 "is_configured": true, 00:15:54.684 "data_offset": 0, 00:15:54.684 "data_size": 65536 00:15:54.684 }, 00:15:54.684 { 00:15:54.684 "name": "BaseBdev4", 00:15:54.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.684 "is_configured": false, 00:15:54.684 "data_offset": 0, 00:15:54.684 "data_size": 0 00:15:54.684 } 00:15:54.684 ] 00:15:54.684 }' 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.684 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.961 [2024-09-29 21:47:13.796091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:54.961 [2024-09-29 21:47:13.796165] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:54.961 [2024-09-29 21:47:13.796178] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:54.961 [2024-09-29 21:47:13.796421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:54.961 [2024-09-29 21:47:13.803593] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:54.961 [2024-09-29 21:47:13.803618] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:54.961 [2024-09-29 21:47:13.803843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.961 BaseBdev4 00:15:54.961 21:47:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.961 [ 00:15:54.961 { 00:15:54.961 "name": "BaseBdev4", 00:15:54.961 "aliases": [ 00:15:54.961 "e5e1a0c9-c032-4034-9395-7e5585890f83" 00:15:54.961 ], 00:15:54.961 "product_name": "Malloc disk", 00:15:54.961 "block_size": 512, 00:15:54.961 "num_blocks": 65536, 00:15:54.961 "uuid": "e5e1a0c9-c032-4034-9395-7e5585890f83", 00:15:54.961 "assigned_rate_limits": { 00:15:54.961 "rw_ios_per_sec": 0, 00:15:54.961 
"rw_mbytes_per_sec": 0, 00:15:54.961 "r_mbytes_per_sec": 0, 00:15:54.961 "w_mbytes_per_sec": 0 00:15:54.961 }, 00:15:54.961 "claimed": true, 00:15:54.961 "claim_type": "exclusive_write", 00:15:54.961 "zoned": false, 00:15:54.961 "supported_io_types": { 00:15:54.961 "read": true, 00:15:54.961 "write": true, 00:15:54.961 "unmap": true, 00:15:54.961 "flush": true, 00:15:54.961 "reset": true, 00:15:54.961 "nvme_admin": false, 00:15:54.961 "nvme_io": false, 00:15:54.961 "nvme_io_md": false, 00:15:54.961 "write_zeroes": true, 00:15:54.961 "zcopy": true, 00:15:54.961 "get_zone_info": false, 00:15:54.961 "zone_management": false, 00:15:54.961 "zone_append": false, 00:15:54.961 "compare": false, 00:15:54.961 "compare_and_write": false, 00:15:54.961 "abort": true, 00:15:54.961 "seek_hole": false, 00:15:54.961 "seek_data": false, 00:15:54.961 "copy": true, 00:15:54.961 "nvme_iov_md": false 00:15:54.961 }, 00:15:54.961 "memory_domains": [ 00:15:54.961 { 00:15:54.961 "dma_device_id": "system", 00:15:54.961 "dma_device_type": 1 00:15:54.961 }, 00:15:54.961 { 00:15:54.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.961 "dma_device_type": 2 00:15:54.961 } 00:15:54.961 ], 00:15:54.961 "driver_specific": {} 00:15:54.961 } 00:15:54.961 ] 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.961 21:47:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.961 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.962 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.962 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.962 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.962 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.962 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.962 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.962 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.962 "name": "Existed_Raid", 00:15:54.962 "uuid": "9d7c209c-83ad-47e0-a1fc-2791de757284", 00:15:54.962 "strip_size_kb": 64, 00:15:54.962 "state": "online", 00:15:54.962 "raid_level": "raid5f", 00:15:54.962 "superblock": false, 00:15:54.962 "num_base_bdevs": 4, 00:15:54.962 "num_base_bdevs_discovered": 4, 00:15:54.962 "num_base_bdevs_operational": 4, 00:15:54.962 "base_bdevs_list": [ 00:15:54.962 { 00:15:54.962 "name": 
"BaseBdev1", 00:15:54.962 "uuid": "9e08a03d-2bd0-4d9d-ab90-e0b13d32ddfb", 00:15:54.962 "is_configured": true, 00:15:54.962 "data_offset": 0, 00:15:54.962 "data_size": 65536 00:15:54.962 }, 00:15:54.962 { 00:15:54.962 "name": "BaseBdev2", 00:15:54.962 "uuid": "6dcedb7a-2bcf-48bf-9585-d7f50836664f", 00:15:54.962 "is_configured": true, 00:15:54.962 "data_offset": 0, 00:15:54.962 "data_size": 65536 00:15:54.962 }, 00:15:54.962 { 00:15:54.962 "name": "BaseBdev3", 00:15:54.962 "uuid": "06080fef-17c4-46a2-bf95-a7f489491b72", 00:15:54.962 "is_configured": true, 00:15:54.962 "data_offset": 0, 00:15:54.962 "data_size": 65536 00:15:54.962 }, 00:15:54.962 { 00:15:54.962 "name": "BaseBdev4", 00:15:54.962 "uuid": "e5e1a0c9-c032-4034-9395-7e5585890f83", 00:15:54.962 "is_configured": true, 00:15:54.962 "data_offset": 0, 00:15:54.962 "data_size": 65536 00:15:54.962 } 00:15:54.962 ] 00:15:54.962 }' 00:15:54.962 21:47:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.962 21:47:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.544 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:55.544 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:55.544 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:55.544 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:55.544 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:55.544 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:55.544 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:55.544 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:15:55.544 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.544 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.544 [2024-09-29 21:47:14.262747] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.544 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.544 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:55.544 "name": "Existed_Raid", 00:15:55.544 "aliases": [ 00:15:55.544 "9d7c209c-83ad-47e0-a1fc-2791de757284" 00:15:55.544 ], 00:15:55.544 "product_name": "Raid Volume", 00:15:55.544 "block_size": 512, 00:15:55.544 "num_blocks": 196608, 00:15:55.544 "uuid": "9d7c209c-83ad-47e0-a1fc-2791de757284", 00:15:55.544 "assigned_rate_limits": { 00:15:55.544 "rw_ios_per_sec": 0, 00:15:55.544 "rw_mbytes_per_sec": 0, 00:15:55.544 "r_mbytes_per_sec": 0, 00:15:55.544 "w_mbytes_per_sec": 0 00:15:55.544 }, 00:15:55.544 "claimed": false, 00:15:55.545 "zoned": false, 00:15:55.545 "supported_io_types": { 00:15:55.545 "read": true, 00:15:55.545 "write": true, 00:15:55.545 "unmap": false, 00:15:55.545 "flush": false, 00:15:55.545 "reset": true, 00:15:55.545 "nvme_admin": false, 00:15:55.545 "nvme_io": false, 00:15:55.545 "nvme_io_md": false, 00:15:55.545 "write_zeroes": true, 00:15:55.545 "zcopy": false, 00:15:55.545 "get_zone_info": false, 00:15:55.545 "zone_management": false, 00:15:55.545 "zone_append": false, 00:15:55.545 "compare": false, 00:15:55.545 "compare_and_write": false, 00:15:55.545 "abort": false, 00:15:55.545 "seek_hole": false, 00:15:55.545 "seek_data": false, 00:15:55.545 "copy": false, 00:15:55.545 "nvme_iov_md": false 00:15:55.545 }, 00:15:55.545 "driver_specific": { 00:15:55.545 "raid": { 00:15:55.545 "uuid": "9d7c209c-83ad-47e0-a1fc-2791de757284", 00:15:55.545 "strip_size_kb": 64, 
00:15:55.545 "state": "online", 00:15:55.545 "raid_level": "raid5f", 00:15:55.545 "superblock": false, 00:15:55.545 "num_base_bdevs": 4, 00:15:55.545 "num_base_bdevs_discovered": 4, 00:15:55.545 "num_base_bdevs_operational": 4, 00:15:55.545 "base_bdevs_list": [ 00:15:55.545 { 00:15:55.545 "name": "BaseBdev1", 00:15:55.545 "uuid": "9e08a03d-2bd0-4d9d-ab90-e0b13d32ddfb", 00:15:55.545 "is_configured": true, 00:15:55.545 "data_offset": 0, 00:15:55.545 "data_size": 65536 00:15:55.545 }, 00:15:55.545 { 00:15:55.545 "name": "BaseBdev2", 00:15:55.545 "uuid": "6dcedb7a-2bcf-48bf-9585-d7f50836664f", 00:15:55.545 "is_configured": true, 00:15:55.545 "data_offset": 0, 00:15:55.545 "data_size": 65536 00:15:55.545 }, 00:15:55.545 { 00:15:55.545 "name": "BaseBdev3", 00:15:55.545 "uuid": "06080fef-17c4-46a2-bf95-a7f489491b72", 00:15:55.545 "is_configured": true, 00:15:55.545 "data_offset": 0, 00:15:55.545 "data_size": 65536 00:15:55.545 }, 00:15:55.545 { 00:15:55.545 "name": "BaseBdev4", 00:15:55.545 "uuid": "e5e1a0c9-c032-4034-9395-7e5585890f83", 00:15:55.545 "is_configured": true, 00:15:55.545 "data_offset": 0, 00:15:55.545 "data_size": 65536 00:15:55.545 } 00:15:55.545 ] 00:15:55.545 } 00:15:55.545 } 00:15:55.545 }' 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:55.545 BaseBdev2 00:15:55.545 BaseBdev3 00:15:55.545 BaseBdev4' 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.545 21:47:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:55.545 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:55.805 [2024-09-29 21:47:14.558160] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.805 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.806 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.806 21:47:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.806 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.806 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.806 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.806 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.806 21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.806 "name": "Existed_Raid", 00:15:55.806 "uuid": "9d7c209c-83ad-47e0-a1fc-2791de757284", 00:15:55.806 "strip_size_kb": 64, 00:15:55.806 "state": "online", 00:15:55.806 "raid_level": "raid5f", 00:15:55.806 "superblock": false, 00:15:55.806 "num_base_bdevs": 4, 00:15:55.806 "num_base_bdevs_discovered": 3, 00:15:55.806 "num_base_bdevs_operational": 3, 00:15:55.806 "base_bdevs_list": [ 00:15:55.806 { 00:15:55.806 "name": null, 00:15:55.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.806 "is_configured": false, 00:15:55.806 "data_offset": 0, 00:15:55.806 "data_size": 65536 00:15:55.806 }, 00:15:55.806 { 00:15:55.806 "name": "BaseBdev2", 00:15:55.806 "uuid": "6dcedb7a-2bcf-48bf-9585-d7f50836664f", 00:15:55.806 "is_configured": true, 00:15:55.806 "data_offset": 0, 00:15:55.806 "data_size": 65536 00:15:55.806 }, 00:15:55.806 { 00:15:55.806 "name": "BaseBdev3", 00:15:55.806 "uuid": "06080fef-17c4-46a2-bf95-a7f489491b72", 00:15:55.806 "is_configured": true, 00:15:55.806 "data_offset": 0, 00:15:55.806 "data_size": 65536 00:15:55.806 }, 00:15:55.806 { 00:15:55.806 "name": "BaseBdev4", 00:15:55.806 "uuid": "e5e1a0c9-c032-4034-9395-7e5585890f83", 00:15:55.806 "is_configured": true, 00:15:55.806 "data_offset": 0, 00:15:55.806 "data_size": 65536 00:15:55.806 } 00:15:55.806 ] 00:15:55.806 }' 00:15:55.806 
21:47:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.806 21:47:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.375 [2024-09-29 21:47:15.117722] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:56.375 [2024-09-29 21:47:15.117821] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:56.375 [2024-09-29 21:47:15.206355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.375 [2024-09-29 21:47:15.262252] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:56.375 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.634 [2024-09-29 21:47:15.408859] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:56.634 [2024-09-29 21:47:15.408909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.634 BaseBdev2 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:56.634 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:56.635 21:47:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.635 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.635 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.635 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.635 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.635 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.635 [ 00:15:56.635 { 00:15:56.635 "name": "BaseBdev2", 00:15:56.635 "aliases": [ 00:15:56.635 "54da5845-8d47-4df7-999d-4bf0662f5e2b" 00:15:56.635 ], 00:15:56.635 "product_name": "Malloc disk", 00:15:56.635 "block_size": 512, 00:15:56.635 "num_blocks": 65536, 00:15:56.635 "uuid": "54da5845-8d47-4df7-999d-4bf0662f5e2b", 00:15:56.635 "assigned_rate_limits": { 00:15:56.635 "rw_ios_per_sec": 0, 00:15:56.635 "rw_mbytes_per_sec": 0, 00:15:56.635 "r_mbytes_per_sec": 0, 00:15:56.635 "w_mbytes_per_sec": 0 00:15:56.635 }, 00:15:56.635 "claimed": false, 00:15:56.635 "zoned": false, 00:15:56.635 "supported_io_types": { 00:15:56.635 "read": true, 00:15:56.635 "write": true, 00:15:56.635 "unmap": true, 00:15:56.635 "flush": true, 00:15:56.635 "reset": true, 00:15:56.635 "nvme_admin": false, 00:15:56.635 "nvme_io": false, 00:15:56.635 "nvme_io_md": false, 00:15:56.635 "write_zeroes": true, 00:15:56.635 "zcopy": true, 00:15:56.635 "get_zone_info": false, 00:15:56.635 "zone_management": false, 00:15:56.635 "zone_append": false, 00:15:56.635 "compare": false, 00:15:56.635 "compare_and_write": false, 00:15:56.635 "abort": true, 00:15:56.635 "seek_hole": false, 00:15:56.635 "seek_data": false, 00:15:56.635 "copy": true, 00:15:56.635 "nvme_iov_md": false 00:15:56.894 }, 00:15:56.894 "memory_domains": [ 00:15:56.894 { 00:15:56.894 "dma_device_id": "system", 00:15:56.894 
"dma_device_type": 1 00:15:56.894 }, 00:15:56.894 { 00:15:56.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.894 "dma_device_type": 2 00:15:56.894 } 00:15:56.894 ], 00:15:56.894 "driver_specific": {} 00:15:56.894 } 00:15:56.895 ] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.895 BaseBdev3 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:56.895 21:47:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.895 [ 00:15:56.895 { 00:15:56.895 "name": "BaseBdev3", 00:15:56.895 "aliases": [ 00:15:56.895 "4c328f94-a809-46dc-bb11-222c58d6a5b7" 00:15:56.895 ], 00:15:56.895 "product_name": "Malloc disk", 00:15:56.895 "block_size": 512, 00:15:56.895 "num_blocks": 65536, 00:15:56.895 "uuid": "4c328f94-a809-46dc-bb11-222c58d6a5b7", 00:15:56.895 "assigned_rate_limits": { 00:15:56.895 "rw_ios_per_sec": 0, 00:15:56.895 "rw_mbytes_per_sec": 0, 00:15:56.895 "r_mbytes_per_sec": 0, 00:15:56.895 "w_mbytes_per_sec": 0 00:15:56.895 }, 00:15:56.895 "claimed": false, 00:15:56.895 "zoned": false, 00:15:56.895 "supported_io_types": { 00:15:56.895 "read": true, 00:15:56.895 "write": true, 00:15:56.895 "unmap": true, 00:15:56.895 "flush": true, 00:15:56.895 "reset": true, 00:15:56.895 "nvme_admin": false, 00:15:56.895 "nvme_io": false, 00:15:56.895 "nvme_io_md": false, 00:15:56.895 "write_zeroes": true, 00:15:56.895 "zcopy": true, 00:15:56.895 "get_zone_info": false, 00:15:56.895 "zone_management": false, 00:15:56.895 "zone_append": false, 00:15:56.895 "compare": false, 00:15:56.895 "compare_and_write": false, 00:15:56.895 "abort": true, 00:15:56.895 "seek_hole": false, 00:15:56.895 "seek_data": false, 00:15:56.895 "copy": true, 00:15:56.895 "nvme_iov_md": false 00:15:56.895 }, 00:15:56.895 "memory_domains": [ 00:15:56.895 { 00:15:56.895 
"dma_device_id": "system", 00:15:56.895 "dma_device_type": 1 00:15:56.895 }, 00:15:56.895 { 00:15:56.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.895 "dma_device_type": 2 00:15:56.895 } 00:15:56.895 ], 00:15:56.895 "driver_specific": {} 00:15:56.895 } 00:15:56.895 ] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.895 BaseBdev4 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.895 [ 00:15:56.895 { 00:15:56.895 "name": "BaseBdev4", 00:15:56.895 "aliases": [ 00:15:56.895 "165c5566-0347-47bb-9596-4dfd6e536583" 00:15:56.895 ], 00:15:56.895 "product_name": "Malloc disk", 00:15:56.895 "block_size": 512, 00:15:56.895 "num_blocks": 65536, 00:15:56.895 "uuid": "165c5566-0347-47bb-9596-4dfd6e536583", 00:15:56.895 "assigned_rate_limits": { 00:15:56.895 "rw_ios_per_sec": 0, 00:15:56.895 "rw_mbytes_per_sec": 0, 00:15:56.895 "r_mbytes_per_sec": 0, 00:15:56.895 "w_mbytes_per_sec": 0 00:15:56.895 }, 00:15:56.895 "claimed": false, 00:15:56.895 "zoned": false, 00:15:56.895 "supported_io_types": { 00:15:56.895 "read": true, 00:15:56.895 "write": true, 00:15:56.895 "unmap": true, 00:15:56.895 "flush": true, 00:15:56.895 "reset": true, 00:15:56.895 "nvme_admin": false, 00:15:56.895 "nvme_io": false, 00:15:56.895 "nvme_io_md": false, 00:15:56.895 "write_zeroes": true, 00:15:56.895 "zcopy": true, 00:15:56.895 "get_zone_info": false, 00:15:56.895 "zone_management": false, 00:15:56.895 "zone_append": false, 00:15:56.895 "compare": false, 00:15:56.895 "compare_and_write": false, 00:15:56.895 "abort": true, 00:15:56.895 "seek_hole": false, 00:15:56.895 "seek_data": false, 00:15:56.895 "copy": true, 00:15:56.895 "nvme_iov_md": false 00:15:56.895 }, 00:15:56.895 "memory_domains": [ 
00:15:56.895 { 00:15:56.895 "dma_device_id": "system", 00:15:56.895 "dma_device_type": 1 00:15:56.895 }, 00:15:56.895 { 00:15:56.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.895 "dma_device_type": 2 00:15:56.895 } 00:15:56.895 ], 00:15:56.895 "driver_specific": {} 00:15:56.895 } 00:15:56.895 ] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.895 [2024-09-29 21:47:15.782467] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:56.895 [2024-09-29 21:47:15.782510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:56.895 [2024-09-29 21:47:15.782531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.895 [2024-09-29 21:47:15.784149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.895 [2024-09-29 21:47:15.784202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.895 "name": "Existed_Raid", 00:15:56.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.895 "strip_size_kb": 64, 00:15:56.895 "state": "configuring", 00:15:56.895 "raid_level": "raid5f", 00:15:56.895 
"superblock": false, 00:15:56.895 "num_base_bdevs": 4, 00:15:56.895 "num_base_bdevs_discovered": 3, 00:15:56.895 "num_base_bdevs_operational": 4, 00:15:56.895 "base_bdevs_list": [ 00:15:56.895 { 00:15:56.895 "name": "BaseBdev1", 00:15:56.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.895 "is_configured": false, 00:15:56.895 "data_offset": 0, 00:15:56.895 "data_size": 0 00:15:56.895 }, 00:15:56.895 { 00:15:56.895 "name": "BaseBdev2", 00:15:56.895 "uuid": "54da5845-8d47-4df7-999d-4bf0662f5e2b", 00:15:56.895 "is_configured": true, 00:15:56.895 "data_offset": 0, 00:15:56.895 "data_size": 65536 00:15:56.895 }, 00:15:56.895 { 00:15:56.895 "name": "BaseBdev3", 00:15:56.895 "uuid": "4c328f94-a809-46dc-bb11-222c58d6a5b7", 00:15:56.895 "is_configured": true, 00:15:56.895 "data_offset": 0, 00:15:56.895 "data_size": 65536 00:15:56.895 }, 00:15:56.895 { 00:15:56.895 "name": "BaseBdev4", 00:15:56.895 "uuid": "165c5566-0347-47bb-9596-4dfd6e536583", 00:15:56.895 "is_configured": true, 00:15:56.895 "data_offset": 0, 00:15:56.895 "data_size": 65536 00:15:56.895 } 00:15:56.895 ] 00:15:56.895 }' 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.895 21:47:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.463 [2024-09-29 21:47:16.253607] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.463 "name": "Existed_Raid", 00:15:57.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.463 "strip_size_kb": 64, 00:15:57.463 "state": "configuring", 00:15:57.463 "raid_level": "raid5f", 00:15:57.463 "superblock": false, 
00:15:57.463 "num_base_bdevs": 4, 00:15:57.463 "num_base_bdevs_discovered": 2, 00:15:57.463 "num_base_bdevs_operational": 4, 00:15:57.463 "base_bdevs_list": [ 00:15:57.463 { 00:15:57.463 "name": "BaseBdev1", 00:15:57.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.463 "is_configured": false, 00:15:57.463 "data_offset": 0, 00:15:57.463 "data_size": 0 00:15:57.463 }, 00:15:57.463 { 00:15:57.463 "name": null, 00:15:57.463 "uuid": "54da5845-8d47-4df7-999d-4bf0662f5e2b", 00:15:57.463 "is_configured": false, 00:15:57.463 "data_offset": 0, 00:15:57.463 "data_size": 65536 00:15:57.463 }, 00:15:57.463 { 00:15:57.463 "name": "BaseBdev3", 00:15:57.463 "uuid": "4c328f94-a809-46dc-bb11-222c58d6a5b7", 00:15:57.463 "is_configured": true, 00:15:57.463 "data_offset": 0, 00:15:57.463 "data_size": 65536 00:15:57.463 }, 00:15:57.463 { 00:15:57.463 "name": "BaseBdev4", 00:15:57.463 "uuid": "165c5566-0347-47bb-9596-4dfd6e536583", 00:15:57.463 "is_configured": true, 00:15:57.463 "data_offset": 0, 00:15:57.463 "data_size": 65536 00:15:57.463 } 00:15:57.463 ] 00:15:57.463 }' 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.463 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:58.030 
21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.030 [2024-09-29 21:47:16.784890] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.030 BaseBdev1 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:58.030 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.030 
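`bdev_malloc_create 32 512 -b BaseBdev1` requests a 32 MiB malloc bdev with a 512-byte block size, which is why the `bdev_get_bdevs` dump that follows reports `"num_blocks": 65536`; `waitforbdev` then polls `bdev_get_bdevs -b BaseBdev1 -t 2000` until the bdev appears. A rough sketch of both pieces (the polling loop is an illustrative stand-in for the real helper in autotest_common.sh):

```python
import time

# bdev_malloc_create takes a size in MiB and a block size in bytes.
size_mib, block_size = 32, 512
num_blocks = size_mib * 1024 * 1024 // block_size
print(num_blocks)  # matches "num_blocks": 65536 in the dump above

# Illustrative stand-in for waitforbdev: retry a lookup until it succeeds
# or the timeout (milliseconds, as in `-t 2000`) expires.
def wait_for_bdev(lookup, timeout_ms=2000, interval_s=0.1):
    deadline = time.monotonic() + timeout_ms / 1000
    while time.monotonic() < deadline:
        if lookup():
            return True
        time.sleep(interval_s)
    return False

found = wait_for_bdev(lambda: True)  # lookup succeeds on the first try here
print(found)
```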
21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.030 [ 00:15:58.030 { 00:15:58.030 "name": "BaseBdev1", 00:15:58.030 "aliases": [ 00:15:58.030 "33aaeb19-7eff-4767-99e5-f80606ea6149" 00:15:58.030 ], 00:15:58.030 "product_name": "Malloc disk", 00:15:58.030 "block_size": 512, 00:15:58.030 "num_blocks": 65536, 00:15:58.030 "uuid": "33aaeb19-7eff-4767-99e5-f80606ea6149", 00:15:58.030 "assigned_rate_limits": { 00:15:58.030 "rw_ios_per_sec": 0, 00:15:58.030 "rw_mbytes_per_sec": 0, 00:15:58.030 "r_mbytes_per_sec": 0, 00:15:58.030 "w_mbytes_per_sec": 0 00:15:58.030 }, 00:15:58.030 "claimed": true, 00:15:58.030 "claim_type": "exclusive_write", 00:15:58.030 "zoned": false, 00:15:58.030 "supported_io_types": { 00:15:58.030 "read": true, 00:15:58.030 "write": true, 00:15:58.030 "unmap": true, 00:15:58.030 "flush": true, 00:15:58.030 "reset": true, 00:15:58.030 "nvme_admin": false, 00:15:58.030 "nvme_io": false, 00:15:58.030 "nvme_io_md": false, 00:15:58.030 "write_zeroes": true, 00:15:58.030 "zcopy": true, 00:15:58.030 "get_zone_info": false, 00:15:58.030 "zone_management": false, 00:15:58.030 "zone_append": false, 00:15:58.030 "compare": false, 00:15:58.030 "compare_and_write": false, 00:15:58.030 "abort": true, 00:15:58.030 "seek_hole": false, 00:15:58.030 "seek_data": false, 00:15:58.030 "copy": true, 00:15:58.030 "nvme_iov_md": false 00:15:58.030 }, 00:15:58.030 "memory_domains": [ 00:15:58.030 { 00:15:58.030 "dma_device_id": "system", 00:15:58.030 "dma_device_type": 1 00:15:58.030 }, 00:15:58.030 { 00:15:58.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.030 "dma_device_type": 2 00:15:58.030 } 00:15:58.030 ], 00:15:58.030 "driver_specific": {} 00:15:58.030 } 00:15:58.030 ] 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:58.031 21:47:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.031 "name": "Existed_Raid", 00:15:58.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.031 "strip_size_kb": 64, 00:15:58.031 "state": 
"configuring", 00:15:58.031 "raid_level": "raid5f", 00:15:58.031 "superblock": false, 00:15:58.031 "num_base_bdevs": 4, 00:15:58.031 "num_base_bdevs_discovered": 3, 00:15:58.031 "num_base_bdevs_operational": 4, 00:15:58.031 "base_bdevs_list": [ 00:15:58.031 { 00:15:58.031 "name": "BaseBdev1", 00:15:58.031 "uuid": "33aaeb19-7eff-4767-99e5-f80606ea6149", 00:15:58.031 "is_configured": true, 00:15:58.031 "data_offset": 0, 00:15:58.031 "data_size": 65536 00:15:58.031 }, 00:15:58.031 { 00:15:58.031 "name": null, 00:15:58.031 "uuid": "54da5845-8d47-4df7-999d-4bf0662f5e2b", 00:15:58.031 "is_configured": false, 00:15:58.031 "data_offset": 0, 00:15:58.031 "data_size": 65536 00:15:58.031 }, 00:15:58.031 { 00:15:58.031 "name": "BaseBdev3", 00:15:58.031 "uuid": "4c328f94-a809-46dc-bb11-222c58d6a5b7", 00:15:58.031 "is_configured": true, 00:15:58.031 "data_offset": 0, 00:15:58.031 "data_size": 65536 00:15:58.031 }, 00:15:58.031 { 00:15:58.031 "name": "BaseBdev4", 00:15:58.031 "uuid": "165c5566-0347-47bb-9596-4dfd6e536583", 00:15:58.031 "is_configured": true, 00:15:58.031 "data_offset": 0, 00:15:58.031 "data_size": 65536 00:15:58.031 } 00:15:58.031 ] 00:15:58.031 }' 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.031 21:47:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.290 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.550 21:47:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.550 [2024-09-29 21:47:17.304128] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.550 21:47:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.550 "name": "Existed_Raid", 00:15:58.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.550 "strip_size_kb": 64, 00:15:58.550 "state": "configuring", 00:15:58.550 "raid_level": "raid5f", 00:15:58.550 "superblock": false, 00:15:58.550 "num_base_bdevs": 4, 00:15:58.550 "num_base_bdevs_discovered": 2, 00:15:58.550 "num_base_bdevs_operational": 4, 00:15:58.550 "base_bdevs_list": [ 00:15:58.550 { 00:15:58.550 "name": "BaseBdev1", 00:15:58.550 "uuid": "33aaeb19-7eff-4767-99e5-f80606ea6149", 00:15:58.550 "is_configured": true, 00:15:58.550 "data_offset": 0, 00:15:58.550 "data_size": 65536 00:15:58.550 }, 00:15:58.550 { 00:15:58.550 "name": null, 00:15:58.550 "uuid": "54da5845-8d47-4df7-999d-4bf0662f5e2b", 00:15:58.550 "is_configured": false, 00:15:58.550 "data_offset": 0, 00:15:58.550 "data_size": 65536 00:15:58.550 }, 00:15:58.550 { 00:15:58.550 "name": null, 00:15:58.550 "uuid": "4c328f94-a809-46dc-bb11-222c58d6a5b7", 00:15:58.550 "is_configured": false, 00:15:58.550 "data_offset": 0, 00:15:58.550 "data_size": 65536 00:15:58.550 }, 00:15:58.550 { 00:15:58.550 "name": "BaseBdev4", 00:15:58.550 "uuid": "165c5566-0347-47bb-9596-4dfd6e536583", 00:15:58.550 "is_configured": true, 00:15:58.550 "data_offset": 0, 00:15:58.550 "data_size": 65536 00:15:58.550 } 00:15:58.550 ] 00:15:58.550 }' 00:15:58.550 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.550 21:47:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.810 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.810 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:58.810 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.810 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.070 [2024-09-29 21:47:17.827313] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.070 
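The remove/add cycle above shows the bookkeeping: `bdev_raid_remove_base_bdev BaseBdev3` leaves the array in `configuring` with the slot's `name` nulled and `is_configured: false` (`num_base_bdevs_discovered` drops from 3 to 2 while `num_base_bdevs_operational` stays 4), and `bdev_raid_add_base_bdev Existed_Raid BaseBdev3` re-claims the slot, bringing discovered back to 3. A toy model of that counter, under the assumption that discovered is simply the number of configured slots:

```python
# Slot map mirroring base_bdevs_list at this point in the trace;
# True means is_configured (slot 1 is still the unconfigured null slot).
slots = {"BaseBdev1": True, "BaseBdev2": False, "BaseBdev3": True, "BaseBdev4": True}

def discovered(slots):
    # Assumption for illustration: num_base_bdevs_discovered counts
    # the slots that currently hold a configured base bdev.
    return sum(slots.values())

print(discovered(slots))      # 3 before the removal

slots["BaseBdev3"] = False    # bdev_raid_remove_base_bdev BaseBdev3
print(discovered(slots))      # 2; the array stays in "configuring"

slots["BaseBdev3"] = True     # bdev_raid_add_base_bdev Existed_Raid BaseBdev3
print(discovered(slots))      # back to 3
```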
21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.070 "name": "Existed_Raid", 00:15:59.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.070 "strip_size_kb": 64, 00:15:59.070 "state": "configuring", 00:15:59.070 "raid_level": "raid5f", 00:15:59.070 "superblock": false, 00:15:59.070 "num_base_bdevs": 4, 00:15:59.070 "num_base_bdevs_discovered": 3, 00:15:59.070 "num_base_bdevs_operational": 4, 00:15:59.070 "base_bdevs_list": [ 00:15:59.070 { 00:15:59.070 "name": "BaseBdev1", 00:15:59.070 "uuid": "33aaeb19-7eff-4767-99e5-f80606ea6149", 00:15:59.070 "is_configured": true, 00:15:59.070 "data_offset": 0, 00:15:59.070 "data_size": 65536 00:15:59.070 }, 00:15:59.070 { 00:15:59.070 "name": null, 00:15:59.070 "uuid": "54da5845-8d47-4df7-999d-4bf0662f5e2b", 00:15:59.070 "is_configured": 
false, 00:15:59.070 "data_offset": 0, 00:15:59.070 "data_size": 65536 00:15:59.070 }, 00:15:59.070 { 00:15:59.070 "name": "BaseBdev3", 00:15:59.070 "uuid": "4c328f94-a809-46dc-bb11-222c58d6a5b7", 00:15:59.070 "is_configured": true, 00:15:59.070 "data_offset": 0, 00:15:59.070 "data_size": 65536 00:15:59.070 }, 00:15:59.070 { 00:15:59.070 "name": "BaseBdev4", 00:15:59.070 "uuid": "165c5566-0347-47bb-9596-4dfd6e536583", 00:15:59.070 "is_configured": true, 00:15:59.070 "data_offset": 0, 00:15:59.070 "data_size": 65536 00:15:59.070 } 00:15:59.070 ] 00:15:59.070 }' 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.070 21:47:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.330 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.330 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.330 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.330 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:59.330 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.590 [2024-09-29 21:47:18.330431] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.590 "name": "Existed_Raid", 00:15:59.590 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:59.590 "strip_size_kb": 64, 00:15:59.590 "state": "configuring", 00:15:59.590 "raid_level": "raid5f", 00:15:59.590 "superblock": false, 00:15:59.590 "num_base_bdevs": 4, 00:15:59.590 "num_base_bdevs_discovered": 2, 00:15:59.590 "num_base_bdevs_operational": 4, 00:15:59.590 "base_bdevs_list": [ 00:15:59.590 { 00:15:59.590 "name": null, 00:15:59.590 "uuid": "33aaeb19-7eff-4767-99e5-f80606ea6149", 00:15:59.590 "is_configured": false, 00:15:59.590 "data_offset": 0, 00:15:59.590 "data_size": 65536 00:15:59.590 }, 00:15:59.590 { 00:15:59.590 "name": null, 00:15:59.590 "uuid": "54da5845-8d47-4df7-999d-4bf0662f5e2b", 00:15:59.590 "is_configured": false, 00:15:59.590 "data_offset": 0, 00:15:59.590 "data_size": 65536 00:15:59.590 }, 00:15:59.590 { 00:15:59.590 "name": "BaseBdev3", 00:15:59.590 "uuid": "4c328f94-a809-46dc-bb11-222c58d6a5b7", 00:15:59.590 "is_configured": true, 00:15:59.590 "data_offset": 0, 00:15:59.590 "data_size": 65536 00:15:59.590 }, 00:15:59.590 { 00:15:59.590 "name": "BaseBdev4", 00:15:59.590 "uuid": "165c5566-0347-47bb-9596-4dfd6e536583", 00:15:59.590 "is_configured": true, 00:15:59.590 "data_offset": 0, 00:15:59.590 "data_size": 65536 00:15:59.590 } 00:15:59.590 ] 00:15:59.590 }' 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.590 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.160 [2024-09-29 21:47:18.938346] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.160 "name": "Existed_Raid", 00:16:00.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.160 "strip_size_kb": 64, 00:16:00.160 "state": "configuring", 00:16:00.160 "raid_level": "raid5f", 00:16:00.160 "superblock": false, 00:16:00.160 "num_base_bdevs": 4, 00:16:00.160 "num_base_bdevs_discovered": 3, 00:16:00.160 "num_base_bdevs_operational": 4, 00:16:00.160 "base_bdevs_list": [ 00:16:00.160 { 00:16:00.160 "name": null, 00:16:00.160 "uuid": "33aaeb19-7eff-4767-99e5-f80606ea6149", 00:16:00.160 "is_configured": false, 00:16:00.160 "data_offset": 0, 00:16:00.160 "data_size": 65536 00:16:00.160 }, 00:16:00.160 { 00:16:00.160 "name": "BaseBdev2", 00:16:00.160 "uuid": "54da5845-8d47-4df7-999d-4bf0662f5e2b", 00:16:00.160 "is_configured": true, 00:16:00.160 "data_offset": 0, 00:16:00.160 "data_size": 65536 00:16:00.160 }, 00:16:00.160 { 00:16:00.160 "name": "BaseBdev3", 00:16:00.160 "uuid": "4c328f94-a809-46dc-bb11-222c58d6a5b7", 00:16:00.160 "is_configured": true, 00:16:00.160 "data_offset": 0, 00:16:00.160 "data_size": 65536 00:16:00.160 }, 00:16:00.160 { 00:16:00.160 "name": "BaseBdev4", 00:16:00.160 "uuid": "165c5566-0347-47bb-9596-4dfd6e536583", 00:16:00.160 "is_configured": true, 00:16:00.160 "data_offset": 0, 00:16:00.160 "data_size": 65536 00:16:00.160 } 00:16:00.160 ] 00:16:00.160 }' 00:16:00.160 21:47:18 
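After `bdev_malloc_delete BaseBdev1`, the emptied slot keeps its original UUID, so the test reads it back with `jq -r '.[0].base_bdevs_list[0].uuid'` and recreates the bdev as `NewBaseBdev` with that UUID, letting the RAID re-claim it into the same slot. A sketch of the extraction (snapshot abridged from the dump above; the command string mirrors the trace and is shown only for illustration):

```python
import json

# Abridged slot state after bdev_malloc_delete BaseBdev1: the slot is
# unconfigured but still carries the deleted bdev's UUID.
dump = json.loads("""
[
  {
    "name": "Existed_Raid",
    "base_bdevs_list": [
      {"name": null, "uuid": "33aaeb19-7eff-4767-99e5-f80606ea6149", "is_configured": false},
      {"name": "BaseBdev2", "uuid": "54da5845-8d47-4df7-999d-4bf0662f5e2b", "is_configured": true}
    ]
  }
]
""")

# Equivalent of: jq -r '.[0].base_bdevs_list[0].uuid'
uuid = dump[0]["base_bdevs_list"][0]["uuid"]
print(uuid)

# Recreate with the same UUID so the RAID can match the new bdev to the slot.
cmd = f"rpc.py bdev_malloc_create 32 512 -b NewBaseBdev -u {uuid}"
print(cmd)
```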
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.160 21:47:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.420 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:00.420 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.420 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.420 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.420 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.420 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 33aaeb19-7eff-4767-99e5-f80606ea6149 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.680 [2024-09-29 21:47:19.471230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:00.680 [2024-09-29 
21:47:19.471280] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:00.680 [2024-09-29 21:47:19.471288] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:00.680 [2024-09-29 21:47:19.471526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:00.680 [2024-09-29 21:47:19.477616] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:00.680 [2024-09-29 21:47:19.477643] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:00.680 [2024-09-29 21:47:19.477871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.680 NewBaseBdev 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.680 [ 00:16:00.680 { 00:16:00.680 "name": "NewBaseBdev", 00:16:00.680 "aliases": [ 00:16:00.680 "33aaeb19-7eff-4767-99e5-f80606ea6149" 00:16:00.680 ], 00:16:00.680 "product_name": "Malloc disk", 00:16:00.680 "block_size": 512, 00:16:00.680 "num_blocks": 65536, 00:16:00.680 "uuid": "33aaeb19-7eff-4767-99e5-f80606ea6149", 00:16:00.680 "assigned_rate_limits": { 00:16:00.680 "rw_ios_per_sec": 0, 00:16:00.680 "rw_mbytes_per_sec": 0, 00:16:00.680 "r_mbytes_per_sec": 0, 00:16:00.680 "w_mbytes_per_sec": 0 00:16:00.680 }, 00:16:00.680 "claimed": true, 00:16:00.680 "claim_type": "exclusive_write", 00:16:00.680 "zoned": false, 00:16:00.680 "supported_io_types": { 00:16:00.680 "read": true, 00:16:00.680 "write": true, 00:16:00.680 "unmap": true, 00:16:00.680 "flush": true, 00:16:00.680 "reset": true, 00:16:00.680 "nvme_admin": false, 00:16:00.680 "nvme_io": false, 00:16:00.680 "nvme_io_md": false, 00:16:00.680 "write_zeroes": true, 00:16:00.680 "zcopy": true, 00:16:00.680 "get_zone_info": false, 00:16:00.680 "zone_management": false, 00:16:00.680 "zone_append": false, 00:16:00.680 "compare": false, 00:16:00.680 "compare_and_write": false, 00:16:00.680 "abort": true, 00:16:00.680 "seek_hole": false, 00:16:00.680 "seek_data": false, 00:16:00.680 "copy": true, 00:16:00.680 "nvme_iov_md": false 00:16:00.680 }, 00:16:00.680 "memory_domains": [ 00:16:00.680 { 00:16:00.680 "dma_device_id": "system", 00:16:00.680 "dma_device_type": 1 00:16:00.680 }, 00:16:00.680 { 00:16:00.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.680 "dma_device_type": 2 00:16:00.680 } 
00:16:00.680 ], 00:16:00.680 "driver_specific": {} 00:16:00.680 } 00:16:00.680 ] 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.680 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.680 "name": "Existed_Raid", 00:16:00.680 "uuid": "7cc22524-2f99-41f7-8419-791a8b2d9d42", 00:16:00.680 "strip_size_kb": 64, 00:16:00.680 "state": "online", 00:16:00.680 "raid_level": "raid5f", 00:16:00.680 "superblock": false, 00:16:00.680 "num_base_bdevs": 4, 00:16:00.680 "num_base_bdevs_discovered": 4, 00:16:00.680 "num_base_bdevs_operational": 4, 00:16:00.680 "base_bdevs_list": [ 00:16:00.680 { 00:16:00.680 "name": "NewBaseBdev", 00:16:00.680 "uuid": "33aaeb19-7eff-4767-99e5-f80606ea6149", 00:16:00.680 "is_configured": true, 00:16:00.680 "data_offset": 0, 00:16:00.680 "data_size": 65536 00:16:00.680 }, 00:16:00.680 { 00:16:00.680 "name": "BaseBdev2", 00:16:00.680 "uuid": "54da5845-8d47-4df7-999d-4bf0662f5e2b", 00:16:00.680 "is_configured": true, 00:16:00.680 "data_offset": 0, 00:16:00.680 "data_size": 65536 00:16:00.680 }, 00:16:00.680 { 00:16:00.680 "name": "BaseBdev3", 00:16:00.680 "uuid": "4c328f94-a809-46dc-bb11-222c58d6a5b7", 00:16:00.680 "is_configured": true, 00:16:00.680 "data_offset": 0, 00:16:00.680 "data_size": 65536 00:16:00.680 }, 00:16:00.680 { 00:16:00.680 "name": "BaseBdev4", 00:16:00.681 "uuid": "165c5566-0347-47bb-9596-4dfd6e536583", 00:16:00.681 "is_configured": true, 00:16:00.681 "data_offset": 0, 00:16:00.681 "data_size": 65536 00:16:00.681 } 00:16:00.681 ] 00:16:00.681 }' 00:16:00.681 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.681 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.248 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:01.248 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:01.248 21:47:19 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:01.248 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:01.248 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:01.248 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:01.248 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:01.249 21:47:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:01.249 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.249 21:47:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.249 [2024-09-29 21:47:19.996860] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:01.249 "name": "Existed_Raid", 00:16:01.249 "aliases": [ 00:16:01.249 "7cc22524-2f99-41f7-8419-791a8b2d9d42" 00:16:01.249 ], 00:16:01.249 "product_name": "Raid Volume", 00:16:01.249 "block_size": 512, 00:16:01.249 "num_blocks": 196608, 00:16:01.249 "uuid": "7cc22524-2f99-41f7-8419-791a8b2d9d42", 00:16:01.249 "assigned_rate_limits": { 00:16:01.249 "rw_ios_per_sec": 0, 00:16:01.249 "rw_mbytes_per_sec": 0, 00:16:01.249 "r_mbytes_per_sec": 0, 00:16:01.249 "w_mbytes_per_sec": 0 00:16:01.249 }, 00:16:01.249 "claimed": false, 00:16:01.249 "zoned": false, 00:16:01.249 "supported_io_types": { 00:16:01.249 "read": true, 00:16:01.249 "write": true, 00:16:01.249 "unmap": false, 00:16:01.249 "flush": false, 00:16:01.249 "reset": true, 00:16:01.249 "nvme_admin": false, 00:16:01.249 "nvme_io": false, 00:16:01.249 "nvme_io_md": 
false, 00:16:01.249 "write_zeroes": true, 00:16:01.249 "zcopy": false, 00:16:01.249 "get_zone_info": false, 00:16:01.249 "zone_management": false, 00:16:01.249 "zone_append": false, 00:16:01.249 "compare": false, 00:16:01.249 "compare_and_write": false, 00:16:01.249 "abort": false, 00:16:01.249 "seek_hole": false, 00:16:01.249 "seek_data": false, 00:16:01.249 "copy": false, 00:16:01.249 "nvme_iov_md": false 00:16:01.249 }, 00:16:01.249 "driver_specific": { 00:16:01.249 "raid": { 00:16:01.249 "uuid": "7cc22524-2f99-41f7-8419-791a8b2d9d42", 00:16:01.249 "strip_size_kb": 64, 00:16:01.249 "state": "online", 00:16:01.249 "raid_level": "raid5f", 00:16:01.249 "superblock": false, 00:16:01.249 "num_base_bdevs": 4, 00:16:01.249 "num_base_bdevs_discovered": 4, 00:16:01.249 "num_base_bdevs_operational": 4, 00:16:01.249 "base_bdevs_list": [ 00:16:01.249 { 00:16:01.249 "name": "NewBaseBdev", 00:16:01.249 "uuid": "33aaeb19-7eff-4767-99e5-f80606ea6149", 00:16:01.249 "is_configured": true, 00:16:01.249 "data_offset": 0, 00:16:01.249 "data_size": 65536 00:16:01.249 }, 00:16:01.249 { 00:16:01.249 "name": "BaseBdev2", 00:16:01.249 "uuid": "54da5845-8d47-4df7-999d-4bf0662f5e2b", 00:16:01.249 "is_configured": true, 00:16:01.249 "data_offset": 0, 00:16:01.249 "data_size": 65536 00:16:01.249 }, 00:16:01.249 { 00:16:01.249 "name": "BaseBdev3", 00:16:01.249 "uuid": "4c328f94-a809-46dc-bb11-222c58d6a5b7", 00:16:01.249 "is_configured": true, 00:16:01.249 "data_offset": 0, 00:16:01.249 "data_size": 65536 00:16:01.249 }, 00:16:01.249 { 00:16:01.249 "name": "BaseBdev4", 00:16:01.249 "uuid": "165c5566-0347-47bb-9596-4dfd6e536583", 00:16:01.249 "is_configured": true, 00:16:01.249 "data_offset": 0, 00:16:01.249 "data_size": 65536 00:16:01.249 } 00:16:01.249 ] 00:16:01.249 } 00:16:01.249 } 00:16:01.249 }' 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:01.249 21:47:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:01.249 BaseBdev2 00:16:01.249 BaseBdev3 00:16:01.249 BaseBdev4' 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.249 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.511 21:47:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.511 [2024-09-29 21:47:20.292218] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:01.511 [2024-09-29 21:47:20.292247] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.511 [2024-09-29 21:47:20.292311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.511 [2024-09-29 21:47:20.292568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.511 [2024-09-29 21:47:20.292586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82818 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 82818 ']' 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 82818 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82818 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:01.511 killing process with pid 82818 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82818' 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 82818 00:16:01.511 [2024-09-29 21:47:20.331268] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.511 21:47:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 82818 00:16:01.771 [2024-09-29 21:47:20.693706] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:03.150 00:16:03.150 real 0m11.479s 00:16:03.150 user 0m18.157s 00:16:03.150 sys 0m2.190s 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.150 ************************************ 00:16:03.150 END TEST raid5f_state_function_test 00:16:03.150 ************************************ 00:16:03.150 21:47:21 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:03.150 21:47:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:03.150 21:47:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:03.150 21:47:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:03.150 ************************************ 00:16:03.150 START TEST 
raid5f_state_function_test_sb 00:16:03.150 ************************************ 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:03.150 
21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83492 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:03.150 Process raid pid: 83492 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83492' 00:16:03.150 21:47:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83492 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83492 ']' 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:03.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:03.150 21:47:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.150 [2024-09-29 21:47:22.078885] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:03.150 [2024-09-29 21:47:22.079012] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.410 [2024-09-29 21:47:22.249101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.669 [2024-09-29 21:47:22.443592] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.669 [2024-09-29 21:47:22.612466] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.669 [2024-09-29 21:47:22.612505] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.929 21:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:03.929 21:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:03.929 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:03.929 21:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.929 21:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.929 [2024-09-29 21:47:22.910128] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:03.929 [2024-09-29 21:47:22.910178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:03.929 [2024-09-29 21:47:22.910187] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.929 [2024-09-29 21:47:22.910196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.929 [2024-09-29 21:47:22.910202] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:03.929 [2024-09-29 21:47:22.910212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:03.929 [2024-09-29 21:47:22.910218] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:03.929 [2024-09-29 21:47:22.910226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.188 "name": "Existed_Raid", 00:16:04.188 "uuid": "04500f88-b0e8-43f3-823c-4c611142b44a", 00:16:04.188 "strip_size_kb": 64, 00:16:04.188 "state": "configuring", 00:16:04.188 "raid_level": "raid5f", 00:16:04.188 "superblock": true, 00:16:04.188 "num_base_bdevs": 4, 00:16:04.188 "num_base_bdevs_discovered": 0, 00:16:04.188 "num_base_bdevs_operational": 4, 00:16:04.188 "base_bdevs_list": [ 00:16:04.188 { 00:16:04.188 "name": "BaseBdev1", 00:16:04.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.188 "is_configured": false, 00:16:04.188 "data_offset": 0, 00:16:04.188 "data_size": 0 00:16:04.188 }, 00:16:04.188 { 00:16:04.188 "name": "BaseBdev2", 00:16:04.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.188 "is_configured": false, 00:16:04.188 "data_offset": 0, 00:16:04.188 "data_size": 0 00:16:04.188 }, 00:16:04.188 { 00:16:04.188 "name": "BaseBdev3", 00:16:04.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.188 "is_configured": false, 00:16:04.188 "data_offset": 0, 00:16:04.188 "data_size": 0 00:16:04.188 }, 00:16:04.188 { 00:16:04.188 "name": "BaseBdev4", 00:16:04.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.188 "is_configured": false, 00:16:04.188 "data_offset": 0, 00:16:04.188 "data_size": 0 00:16:04.188 } 00:16:04.188 ] 00:16:04.188 }' 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.188 21:47:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:04.447 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:04.447 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.447 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.447 [2024-09-29 21:47:23.369216] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.447 [2024-09-29 21:47:23.369256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:04.448 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.448 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:04.448 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.448 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.448 [2024-09-29 21:47:23.381225] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:04.448 [2024-09-29 21:47:23.381262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:04.448 [2024-09-29 21:47:23.381270] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.448 [2024-09-29 21:47:23.381279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.448 [2024-09-29 21:47:23.381285] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:04.448 [2024-09-29 21:47:23.381293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.448 [2024-09-29 21:47:23.381298] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:04.448 [2024-09-29 21:47:23.381307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:04.448 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.448 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:04.448 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.448 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.708 [2024-09-29 21:47:23.458261] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.708 BaseBdev1 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.708 [ 00:16:04.708 { 00:16:04.708 "name": "BaseBdev1", 00:16:04.708 "aliases": [ 00:16:04.708 "b97d0c25-6940-4ec0-931f-a623ce764dbb" 00:16:04.708 ], 00:16:04.708 "product_name": "Malloc disk", 00:16:04.708 "block_size": 512, 00:16:04.708 "num_blocks": 65536, 00:16:04.708 "uuid": "b97d0c25-6940-4ec0-931f-a623ce764dbb", 00:16:04.708 "assigned_rate_limits": { 00:16:04.708 "rw_ios_per_sec": 0, 00:16:04.708 "rw_mbytes_per_sec": 0, 00:16:04.708 "r_mbytes_per_sec": 0, 00:16:04.708 "w_mbytes_per_sec": 0 00:16:04.708 }, 00:16:04.708 "claimed": true, 00:16:04.708 "claim_type": "exclusive_write", 00:16:04.708 "zoned": false, 00:16:04.708 "supported_io_types": { 00:16:04.708 "read": true, 00:16:04.708 "write": true, 00:16:04.708 "unmap": true, 00:16:04.708 "flush": true, 00:16:04.708 "reset": true, 00:16:04.708 "nvme_admin": false, 00:16:04.708 "nvme_io": false, 00:16:04.708 "nvme_io_md": false, 00:16:04.708 "write_zeroes": true, 00:16:04.708 "zcopy": true, 00:16:04.708 "get_zone_info": false, 00:16:04.708 "zone_management": false, 00:16:04.708 "zone_append": false, 00:16:04.708 "compare": false, 00:16:04.708 "compare_and_write": false, 00:16:04.708 "abort": true, 00:16:04.708 "seek_hole": false, 00:16:04.708 "seek_data": false, 00:16:04.708 "copy": true, 00:16:04.708 "nvme_iov_md": false 00:16:04.708 }, 00:16:04.708 "memory_domains": [ 00:16:04.708 { 00:16:04.708 "dma_device_id": "system", 00:16:04.708 "dma_device_type": 1 00:16:04.708 }, 00:16:04.708 { 00:16:04.708 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:04.708 "dma_device_type": 2 00:16:04.708 } 00:16:04.708 ], 00:16:04.708 "driver_specific": {} 00:16:04.708 } 00:16:04.708 ] 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.708 21:47:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.708 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.708 "name": "Existed_Raid", 00:16:04.708 "uuid": "1321f750-af74-4a14-947f-0915fee3c71b", 00:16:04.708 "strip_size_kb": 64, 00:16:04.708 "state": "configuring", 00:16:04.708 "raid_level": "raid5f", 00:16:04.708 "superblock": true, 00:16:04.708 "num_base_bdevs": 4, 00:16:04.708 "num_base_bdevs_discovered": 1, 00:16:04.708 "num_base_bdevs_operational": 4, 00:16:04.708 "base_bdevs_list": [ 00:16:04.709 { 00:16:04.709 "name": "BaseBdev1", 00:16:04.709 "uuid": "b97d0c25-6940-4ec0-931f-a623ce764dbb", 00:16:04.709 "is_configured": true, 00:16:04.709 "data_offset": 2048, 00:16:04.709 "data_size": 63488 00:16:04.709 }, 00:16:04.709 { 00:16:04.709 "name": "BaseBdev2", 00:16:04.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.709 "is_configured": false, 00:16:04.709 "data_offset": 0, 00:16:04.709 "data_size": 0 00:16:04.709 }, 00:16:04.709 { 00:16:04.709 "name": "BaseBdev3", 00:16:04.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.709 "is_configured": false, 00:16:04.709 "data_offset": 0, 00:16:04.709 "data_size": 0 00:16:04.709 }, 00:16:04.709 { 00:16:04.709 "name": "BaseBdev4", 00:16:04.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.709 "is_configured": false, 00:16:04.709 "data_offset": 0, 00:16:04.709 "data_size": 0 00:16:04.709 } 00:16:04.709 ] 00:16:04.709 }' 00:16:04.709 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.709 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:04.969 21:47:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.969 [2024-09-29 21:47:23.913517] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.969 [2024-09-29 21:47:23.913559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.969 [2024-09-29 21:47:23.925549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.969 [2024-09-29 21:47:23.927150] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.969 [2024-09-29 21:47:23.927188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.969 [2024-09-29 21:47:23.927197] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:04.969 [2024-09-29 21:47:23.927207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.969 [2024-09-29 21:47:23.927213] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:04.969 [2024-09-29 21:47:23.927221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.969 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.969 21:47:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.228 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.228 "name": "Existed_Raid", 00:16:05.228 "uuid": "15c0917c-ce4a-43eb-8649-55edda444230", 00:16:05.228 "strip_size_kb": 64, 00:16:05.228 "state": "configuring", 00:16:05.228 "raid_level": "raid5f", 00:16:05.228 "superblock": true, 00:16:05.228 "num_base_bdevs": 4, 00:16:05.228 "num_base_bdevs_discovered": 1, 00:16:05.228 "num_base_bdevs_operational": 4, 00:16:05.228 "base_bdevs_list": [ 00:16:05.228 { 00:16:05.228 "name": "BaseBdev1", 00:16:05.228 "uuid": "b97d0c25-6940-4ec0-931f-a623ce764dbb", 00:16:05.228 "is_configured": true, 00:16:05.228 "data_offset": 2048, 00:16:05.228 "data_size": 63488 00:16:05.228 }, 00:16:05.228 { 00:16:05.228 "name": "BaseBdev2", 00:16:05.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.228 "is_configured": false, 00:16:05.228 "data_offset": 0, 00:16:05.228 "data_size": 0 00:16:05.228 }, 00:16:05.228 { 00:16:05.228 "name": "BaseBdev3", 00:16:05.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.228 "is_configured": false, 00:16:05.228 "data_offset": 0, 00:16:05.228 "data_size": 0 00:16:05.228 }, 00:16:05.228 { 00:16:05.228 "name": "BaseBdev4", 00:16:05.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.228 "is_configured": false, 00:16:05.228 "data_offset": 0, 00:16:05.228 "data_size": 0 00:16:05.228 } 00:16:05.228 ] 00:16:05.228 }' 00:16:05.228 21:47:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.228 21:47:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.488 [2024-09-29 21:47:24.389504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:05.488 BaseBdev2 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.488 [ 00:16:05.488 { 00:16:05.488 "name": "BaseBdev2", 00:16:05.488 "aliases": [ 00:16:05.488 
"cea88946-34d8-4c16-928a-a1556ef23e38" 00:16:05.488 ], 00:16:05.488 "product_name": "Malloc disk", 00:16:05.488 "block_size": 512, 00:16:05.488 "num_blocks": 65536, 00:16:05.488 "uuid": "cea88946-34d8-4c16-928a-a1556ef23e38", 00:16:05.488 "assigned_rate_limits": { 00:16:05.488 "rw_ios_per_sec": 0, 00:16:05.488 "rw_mbytes_per_sec": 0, 00:16:05.488 "r_mbytes_per_sec": 0, 00:16:05.488 "w_mbytes_per_sec": 0 00:16:05.488 }, 00:16:05.488 "claimed": true, 00:16:05.488 "claim_type": "exclusive_write", 00:16:05.488 "zoned": false, 00:16:05.488 "supported_io_types": { 00:16:05.488 "read": true, 00:16:05.488 "write": true, 00:16:05.488 "unmap": true, 00:16:05.488 "flush": true, 00:16:05.488 "reset": true, 00:16:05.488 "nvme_admin": false, 00:16:05.488 "nvme_io": false, 00:16:05.488 "nvme_io_md": false, 00:16:05.488 "write_zeroes": true, 00:16:05.488 "zcopy": true, 00:16:05.488 "get_zone_info": false, 00:16:05.488 "zone_management": false, 00:16:05.488 "zone_append": false, 00:16:05.488 "compare": false, 00:16:05.488 "compare_and_write": false, 00:16:05.488 "abort": true, 00:16:05.488 "seek_hole": false, 00:16:05.488 "seek_data": false, 00:16:05.488 "copy": true, 00:16:05.488 "nvme_iov_md": false 00:16:05.488 }, 00:16:05.488 "memory_domains": [ 00:16:05.488 { 00:16:05.488 "dma_device_id": "system", 00:16:05.488 "dma_device_type": 1 00:16:05.488 }, 00:16:05.488 { 00:16:05.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.488 "dma_device_type": 2 00:16:05.488 } 00:16:05.488 ], 00:16:05.488 "driver_specific": {} 00:16:05.488 } 00:16:05.488 ] 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
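The `verify_raid_bdev_state` helper invoked throughout this run filters the `bdev_raid_get_bdevs all` output through jq and compares each field against the expected values. A minimal Python sketch of the same check, using the field values reported earlier in this log (the function name and structure are illustrative, not the test's actual shell implementation):

```python
import json

# Raid bdev info as reported by `bdev_raid_get_bdevs all` while the array is
# still assembling (trimmed to the fields the state check actually uses).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid5f",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the shell helper: every field must match the expectation, and
    # the discovered count can never exceed the operational count.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    assert info["num_base_bdevs_discovered"] <= operational

verify_raid_bdev_state(raid_bdev_info, "configuring", "raid5f", 64, 4)
```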
00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.488 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.489 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.489 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.489 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.489 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.489 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.489 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.489 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.489 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.489 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.489 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.489 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.489 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.748 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.748 "name": "Existed_Raid", 00:16:05.748 "uuid": 
"15c0917c-ce4a-43eb-8649-55edda444230", 00:16:05.748 "strip_size_kb": 64, 00:16:05.748 "state": "configuring", 00:16:05.748 "raid_level": "raid5f", 00:16:05.748 "superblock": true, 00:16:05.748 "num_base_bdevs": 4, 00:16:05.748 "num_base_bdevs_discovered": 2, 00:16:05.748 "num_base_bdevs_operational": 4, 00:16:05.748 "base_bdevs_list": [ 00:16:05.748 { 00:16:05.748 "name": "BaseBdev1", 00:16:05.748 "uuid": "b97d0c25-6940-4ec0-931f-a623ce764dbb", 00:16:05.748 "is_configured": true, 00:16:05.748 "data_offset": 2048, 00:16:05.748 "data_size": 63488 00:16:05.748 }, 00:16:05.748 { 00:16:05.748 "name": "BaseBdev2", 00:16:05.748 "uuid": "cea88946-34d8-4c16-928a-a1556ef23e38", 00:16:05.748 "is_configured": true, 00:16:05.748 "data_offset": 2048, 00:16:05.748 "data_size": 63488 00:16:05.748 }, 00:16:05.748 { 00:16:05.748 "name": "BaseBdev3", 00:16:05.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.748 "is_configured": false, 00:16:05.748 "data_offset": 0, 00:16:05.748 "data_size": 0 00:16:05.748 }, 00:16:05.748 { 00:16:05.748 "name": "BaseBdev4", 00:16:05.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.748 "is_configured": false, 00:16:05.748 "data_offset": 0, 00:16:05.748 "data_size": 0 00:16:05.748 } 00:16:05.748 ] 00:16:05.748 }' 00:16:05.748 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.748 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.008 [2024-09-29 21:47:24.926937] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.008 BaseBdev3 
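The `num_base_bdevs_discovered` counter in the RPC output above tracks how many slots in `base_bdevs_list` have been claimed; undiscovered slots keep an all-zero placeholder UUID and zero data offset/size. A small sketch of that relationship, with the list values copied from the log output above:

```python
import json

# base_bdevs_list as reported after BaseBdev2 was claimed: two configured
# entries, two all-zero placeholders still waiting to be discovered.
base_bdevs = json.loads("""
[
  {"name": "BaseBdev1", "uuid": "b97d0c25-6940-4ec0-931f-a623ce764dbb",
   "is_configured": true, "data_offset": 2048, "data_size": 63488},
  {"name": "BaseBdev2", "uuid": "cea88946-34d8-4c16-928a-a1556ef23e38",
   "is_configured": true, "data_offset": 2048, "data_size": 63488},
  {"name": "BaseBdev3", "uuid": "00000000-0000-0000-0000-000000000000",
   "is_configured": false, "data_offset": 0, "data_size": 0},
  {"name": "BaseBdev4", "uuid": "00000000-0000-0000-0000-000000000000",
   "is_configured": false, "data_offset": 0, "data_size": 0}
]
""")

PLACEHOLDER_UUID = "00000000-0000-0000-0000-000000000000"

discovered = sum(1 for b in base_bdevs if b["is_configured"])
assert discovered == 2  # matches num_base_bdevs_discovered in the RPC output

# Slots that have not been discovered yet carry the placeholder UUID.
assert all(b["uuid"] == PLACEHOLDER_UUID
           for b in base_bdevs if not b["is_configured"])
```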
00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.008 [ 00:16:06.008 { 00:16:06.008 "name": "BaseBdev3", 00:16:06.008 "aliases": [ 00:16:06.008 "1e312e4c-0ee9-4225-a947-5491f44ad99e" 00:16:06.008 ], 00:16:06.008 "product_name": "Malloc disk", 00:16:06.008 "block_size": 512, 00:16:06.008 "num_blocks": 65536, 00:16:06.008 "uuid": "1e312e4c-0ee9-4225-a947-5491f44ad99e", 00:16:06.008 
"assigned_rate_limits": { 00:16:06.008 "rw_ios_per_sec": 0, 00:16:06.008 "rw_mbytes_per_sec": 0, 00:16:06.008 "r_mbytes_per_sec": 0, 00:16:06.008 "w_mbytes_per_sec": 0 00:16:06.008 }, 00:16:06.008 "claimed": true, 00:16:06.008 "claim_type": "exclusive_write", 00:16:06.008 "zoned": false, 00:16:06.008 "supported_io_types": { 00:16:06.008 "read": true, 00:16:06.008 "write": true, 00:16:06.008 "unmap": true, 00:16:06.008 "flush": true, 00:16:06.008 "reset": true, 00:16:06.008 "nvme_admin": false, 00:16:06.008 "nvme_io": false, 00:16:06.008 "nvme_io_md": false, 00:16:06.008 "write_zeroes": true, 00:16:06.008 "zcopy": true, 00:16:06.008 "get_zone_info": false, 00:16:06.008 "zone_management": false, 00:16:06.008 "zone_append": false, 00:16:06.008 "compare": false, 00:16:06.008 "compare_and_write": false, 00:16:06.008 "abort": true, 00:16:06.008 "seek_hole": false, 00:16:06.008 "seek_data": false, 00:16:06.008 "copy": true, 00:16:06.008 "nvme_iov_md": false 00:16:06.008 }, 00:16:06.008 "memory_domains": [ 00:16:06.008 { 00:16:06.008 "dma_device_id": "system", 00:16:06.008 "dma_device_type": 1 00:16:06.008 }, 00:16:06.008 { 00:16:06.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.008 "dma_device_type": 2 00:16:06.008 } 00:16:06.008 ], 00:16:06.008 "driver_specific": {} 00:16:06.008 } 00:16:06.008 ] 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.008 21:47:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.268 21:47:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.268 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.268 "name": "Existed_Raid", 00:16:06.268 "uuid": "15c0917c-ce4a-43eb-8649-55edda444230", 00:16:06.268 "strip_size_kb": 64, 00:16:06.268 "state": "configuring", 00:16:06.268 "raid_level": "raid5f", 00:16:06.268 "superblock": true, 00:16:06.268 "num_base_bdevs": 4, 00:16:06.268 "num_base_bdevs_discovered": 3, 
00:16:06.268 "num_base_bdevs_operational": 4, 00:16:06.268 "base_bdevs_list": [ 00:16:06.268 { 00:16:06.268 "name": "BaseBdev1", 00:16:06.268 "uuid": "b97d0c25-6940-4ec0-931f-a623ce764dbb", 00:16:06.268 "is_configured": true, 00:16:06.268 "data_offset": 2048, 00:16:06.268 "data_size": 63488 00:16:06.268 }, 00:16:06.268 { 00:16:06.268 "name": "BaseBdev2", 00:16:06.268 "uuid": "cea88946-34d8-4c16-928a-a1556ef23e38", 00:16:06.268 "is_configured": true, 00:16:06.268 "data_offset": 2048, 00:16:06.268 "data_size": 63488 00:16:06.268 }, 00:16:06.268 { 00:16:06.268 "name": "BaseBdev3", 00:16:06.268 "uuid": "1e312e4c-0ee9-4225-a947-5491f44ad99e", 00:16:06.268 "is_configured": true, 00:16:06.268 "data_offset": 2048, 00:16:06.268 "data_size": 63488 00:16:06.268 }, 00:16:06.268 { 00:16:06.268 "name": "BaseBdev4", 00:16:06.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.268 "is_configured": false, 00:16:06.268 "data_offset": 0, 00:16:06.268 "data_size": 0 00:16:06.268 } 00:16:06.268 ] 00:16:06.268 }' 00:16:06.268 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.268 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.528 [2024-09-29 21:47:25.451222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.528 [2024-09-29 21:47:25.451488] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:06.528 [2024-09-29 21:47:25.451510] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:06.528 [2024-09-29 
21:47:25.451747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:06.528 BaseBdev4 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.528 [2024-09-29 21:47:25.458486] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:06.528 [2024-09-29 21:47:25.458512] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:06.528 [2024-09-29 21:47:25.458742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:06.528 21:47:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.528 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.528 [ 00:16:06.528 { 00:16:06.528 "name": "BaseBdev4", 00:16:06.528 "aliases": [ 00:16:06.528 "3eaf9ecd-e22c-4df1-abbf-5e78c224c85e" 00:16:06.528 ], 00:16:06.528 "product_name": "Malloc disk", 00:16:06.528 "block_size": 512, 00:16:06.528 "num_blocks": 65536, 00:16:06.528 "uuid": "3eaf9ecd-e22c-4df1-abbf-5e78c224c85e", 00:16:06.528 "assigned_rate_limits": { 00:16:06.528 "rw_ios_per_sec": 0, 00:16:06.528 "rw_mbytes_per_sec": 0, 00:16:06.528 "r_mbytes_per_sec": 0, 00:16:06.528 "w_mbytes_per_sec": 0 00:16:06.529 }, 00:16:06.529 "claimed": true, 00:16:06.529 "claim_type": "exclusive_write", 00:16:06.529 "zoned": false, 00:16:06.529 "supported_io_types": { 00:16:06.529 "read": true, 00:16:06.529 "write": true, 00:16:06.529 "unmap": true, 00:16:06.529 "flush": true, 00:16:06.529 "reset": true, 00:16:06.529 "nvme_admin": false, 00:16:06.529 "nvme_io": false, 00:16:06.529 "nvme_io_md": false, 00:16:06.529 "write_zeroes": true, 00:16:06.529 "zcopy": true, 00:16:06.529 "get_zone_info": false, 00:16:06.529 "zone_management": false, 00:16:06.529 "zone_append": false, 00:16:06.529 "compare": false, 00:16:06.529 "compare_and_write": false, 00:16:06.529 "abort": true, 00:16:06.529 "seek_hole": false, 00:16:06.529 "seek_data": false, 00:16:06.529 "copy": true, 00:16:06.529 "nvme_iov_md": false 00:16:06.529 }, 00:16:06.529 "memory_domains": [ 00:16:06.529 { 00:16:06.529 "dma_device_id": "system", 00:16:06.529 "dma_device_type": 1 00:16:06.529 }, 00:16:06.529 { 00:16:06.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.529 "dma_device_type": 2 00:16:06.529 } 00:16:06.529 ], 00:16:06.529 "driver_specific": {} 00:16:06.529 } 00:16:06.529 ] 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.529 21:47:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
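Once BaseBdev4 is claimed the array transitions to online with `blockcnt 190464, blocklen 512` (logged above). The arithmetic behind that number can be sketched as follows; the exact on-disk layout is SPDK's, but the capacity relationship for raid5f (one base bdev's worth of parity per stripe) follows from the values in this log:

```python
# Values taken from this log: 4 base bdevs of 65536 blocks each, with a
# 2048-block superblock offset (-s was passed to bdev_raid_create).
num_base_bdevs = 4
total_blocks_per_bdev = 65536
superblock_offset = 2048

# Each base bdev contributes its blocks minus the superblock region,
# matching data_size in base_bdevs_list.
data_size = total_blocks_per_bdev - superblock_offset
assert data_size == 63488

# raid5f dedicates one base bdev's worth of space per stripe to parity, so
# the array exposes (n - 1) * data_size usable blocks.
raid_blocks = (num_base_bdevs - 1) * data_size
assert raid_blocks == 190464  # matches "blockcnt 190464" in the log above
```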
00:16:06.529 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.789 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.789 "name": "Existed_Raid", 00:16:06.789 "uuid": "15c0917c-ce4a-43eb-8649-55edda444230", 00:16:06.789 "strip_size_kb": 64, 00:16:06.789 "state": "online", 00:16:06.789 "raid_level": "raid5f", 00:16:06.789 "superblock": true, 00:16:06.789 "num_base_bdevs": 4, 00:16:06.789 "num_base_bdevs_discovered": 4, 00:16:06.789 "num_base_bdevs_operational": 4, 00:16:06.789 "base_bdevs_list": [ 00:16:06.789 { 00:16:06.789 "name": "BaseBdev1", 00:16:06.789 "uuid": "b97d0c25-6940-4ec0-931f-a623ce764dbb", 00:16:06.789 "is_configured": true, 00:16:06.789 "data_offset": 2048, 00:16:06.789 "data_size": 63488 00:16:06.789 }, 00:16:06.789 { 00:16:06.789 "name": "BaseBdev2", 00:16:06.789 "uuid": "cea88946-34d8-4c16-928a-a1556ef23e38", 00:16:06.789 "is_configured": true, 00:16:06.789 "data_offset": 2048, 00:16:06.789 "data_size": 63488 00:16:06.789 }, 00:16:06.789 { 00:16:06.789 "name": "BaseBdev3", 00:16:06.789 "uuid": "1e312e4c-0ee9-4225-a947-5491f44ad99e", 00:16:06.789 "is_configured": true, 00:16:06.789 "data_offset": 2048, 00:16:06.789 "data_size": 63488 00:16:06.789 }, 00:16:06.789 { 00:16:06.789 "name": "BaseBdev4", 00:16:06.789 "uuid": "3eaf9ecd-e22c-4df1-abbf-5e78c224c85e", 00:16:06.789 "is_configured": true, 00:16:06.789 "data_offset": 2048, 00:16:06.789 "data_size": 63488 00:16:06.789 } 00:16:06.789 ] 00:16:06.789 }' 00:16:06.789 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.789 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.049 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:07.049 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:07.049 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:07.049 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:07.049 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:07.049 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:07.049 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:07.049 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:07.049 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.049 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.049 [2024-09-29 21:47:25.917620] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.049 21:47:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.049 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:07.049 "name": "Existed_Raid", 00:16:07.049 "aliases": [ 00:16:07.049 "15c0917c-ce4a-43eb-8649-55edda444230" 00:16:07.049 ], 00:16:07.049 "product_name": "Raid Volume", 00:16:07.049 "block_size": 512, 00:16:07.049 "num_blocks": 190464, 00:16:07.049 "uuid": "15c0917c-ce4a-43eb-8649-55edda444230", 00:16:07.049 "assigned_rate_limits": { 00:16:07.049 "rw_ios_per_sec": 0, 00:16:07.049 "rw_mbytes_per_sec": 0, 00:16:07.049 "r_mbytes_per_sec": 0, 00:16:07.049 "w_mbytes_per_sec": 0 00:16:07.049 }, 00:16:07.049 "claimed": false, 00:16:07.049 "zoned": false, 00:16:07.049 "supported_io_types": { 00:16:07.049 "read": true, 00:16:07.049 "write": true, 00:16:07.049 "unmap": false, 00:16:07.049 "flush": false, 
00:16:07.049 "reset": true, 00:16:07.049 "nvme_admin": false, 00:16:07.049 "nvme_io": false, 00:16:07.049 "nvme_io_md": false, 00:16:07.049 "write_zeroes": true, 00:16:07.049 "zcopy": false, 00:16:07.049 "get_zone_info": false, 00:16:07.049 "zone_management": false, 00:16:07.049 "zone_append": false, 00:16:07.049 "compare": false, 00:16:07.049 "compare_and_write": false, 00:16:07.049 "abort": false, 00:16:07.049 "seek_hole": false, 00:16:07.049 "seek_data": false, 00:16:07.049 "copy": false, 00:16:07.049 "nvme_iov_md": false 00:16:07.049 }, 00:16:07.049 "driver_specific": { 00:16:07.049 "raid": { 00:16:07.049 "uuid": "15c0917c-ce4a-43eb-8649-55edda444230", 00:16:07.049 "strip_size_kb": 64, 00:16:07.049 "state": "online", 00:16:07.049 "raid_level": "raid5f", 00:16:07.049 "superblock": true, 00:16:07.049 "num_base_bdevs": 4, 00:16:07.049 "num_base_bdevs_discovered": 4, 00:16:07.049 "num_base_bdevs_operational": 4, 00:16:07.049 "base_bdevs_list": [ 00:16:07.049 { 00:16:07.049 "name": "BaseBdev1", 00:16:07.049 "uuid": "b97d0c25-6940-4ec0-931f-a623ce764dbb", 00:16:07.049 "is_configured": true, 00:16:07.049 "data_offset": 2048, 00:16:07.050 "data_size": 63488 00:16:07.050 }, 00:16:07.050 { 00:16:07.050 "name": "BaseBdev2", 00:16:07.050 "uuid": "cea88946-34d8-4c16-928a-a1556ef23e38", 00:16:07.050 "is_configured": true, 00:16:07.050 "data_offset": 2048, 00:16:07.050 "data_size": 63488 00:16:07.050 }, 00:16:07.050 { 00:16:07.050 "name": "BaseBdev3", 00:16:07.050 "uuid": "1e312e4c-0ee9-4225-a947-5491f44ad99e", 00:16:07.050 "is_configured": true, 00:16:07.050 "data_offset": 2048, 00:16:07.050 "data_size": 63488 00:16:07.050 }, 00:16:07.050 { 00:16:07.050 "name": "BaseBdev4", 00:16:07.050 "uuid": "3eaf9ecd-e22c-4df1-abbf-5e78c224c85e", 00:16:07.050 "is_configured": true, 00:16:07.050 "data_offset": 2048, 00:16:07.050 "data_size": 63488 00:16:07.050 } 00:16:07.050 ] 00:16:07.050 } 00:16:07.050 } 00:16:07.050 }' 00:16:07.050 21:47:25 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:07.050 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:07.050 BaseBdev2 00:16:07.050 BaseBdev3 00:16:07.050 BaseBdev4' 00:16:07.050 21:47:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:07.310 21:47:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.310 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.310 [2024-09-29 21:47:26.232942] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.570 "name": "Existed_Raid", 00:16:07.570 "uuid": "15c0917c-ce4a-43eb-8649-55edda444230", 00:16:07.570 "strip_size_kb": 64, 00:16:07.570 "state": "online", 00:16:07.570 "raid_level": "raid5f", 00:16:07.570 "superblock": true, 00:16:07.570 "num_base_bdevs": 4, 00:16:07.570 "num_base_bdevs_discovered": 3, 00:16:07.570 "num_base_bdevs_operational": 3, 00:16:07.570 "base_bdevs_list": [ 00:16:07.570 { 00:16:07.570 "name": 
null, 00:16:07.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.570 "is_configured": false, 00:16:07.570 "data_offset": 0, 00:16:07.570 "data_size": 63488 00:16:07.570 }, 00:16:07.570 { 00:16:07.570 "name": "BaseBdev2", 00:16:07.570 "uuid": "cea88946-34d8-4c16-928a-a1556ef23e38", 00:16:07.570 "is_configured": true, 00:16:07.570 "data_offset": 2048, 00:16:07.570 "data_size": 63488 00:16:07.570 }, 00:16:07.570 { 00:16:07.570 "name": "BaseBdev3", 00:16:07.570 "uuid": "1e312e4c-0ee9-4225-a947-5491f44ad99e", 00:16:07.570 "is_configured": true, 00:16:07.570 "data_offset": 2048, 00:16:07.570 "data_size": 63488 00:16:07.570 }, 00:16:07.570 { 00:16:07.570 "name": "BaseBdev4", 00:16:07.570 "uuid": "3eaf9ecd-e22c-4df1-abbf-5e78c224c85e", 00:16:07.570 "is_configured": true, 00:16:07.570 "data_offset": 2048, 00:16:07.570 "data_size": 63488 00:16:07.570 } 00:16:07.570 ] 00:16:07.570 }' 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.570 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.830 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:07.830 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.090 [2024-09-29 21:47:26.852868] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:08.090 [2024-09-29 21:47:26.853024] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.090 [2024-09-29 21:47:26.942181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.090 21:47:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.090 [2024-09-29 21:47:26.986153] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.350 [2024-09-29 
21:47:27.127833] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:08.350 [2024-09-29 21:47:27.127885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.350 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.351 21:47:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.351 BaseBdev2 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.351 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.351 [ 00:16:08.351 { 00:16:08.351 "name": "BaseBdev2", 00:16:08.351 "aliases": [ 00:16:08.351 "85d75ba9-ed29-41ed-93ee-e964df4f3f77" 00:16:08.351 ], 00:16:08.351 "product_name": "Malloc disk", 00:16:08.612 "block_size": 512, 00:16:08.612 
"num_blocks": 65536, 00:16:08.612 "uuid": "85d75ba9-ed29-41ed-93ee-e964df4f3f77", 00:16:08.612 "assigned_rate_limits": { 00:16:08.612 "rw_ios_per_sec": 0, 00:16:08.612 "rw_mbytes_per_sec": 0, 00:16:08.612 "r_mbytes_per_sec": 0, 00:16:08.612 "w_mbytes_per_sec": 0 00:16:08.612 }, 00:16:08.612 "claimed": false, 00:16:08.612 "zoned": false, 00:16:08.612 "supported_io_types": { 00:16:08.612 "read": true, 00:16:08.612 "write": true, 00:16:08.612 "unmap": true, 00:16:08.612 "flush": true, 00:16:08.612 "reset": true, 00:16:08.612 "nvme_admin": false, 00:16:08.612 "nvme_io": false, 00:16:08.612 "nvme_io_md": false, 00:16:08.612 "write_zeroes": true, 00:16:08.612 "zcopy": true, 00:16:08.612 "get_zone_info": false, 00:16:08.612 "zone_management": false, 00:16:08.612 "zone_append": false, 00:16:08.612 "compare": false, 00:16:08.612 "compare_and_write": false, 00:16:08.612 "abort": true, 00:16:08.612 "seek_hole": false, 00:16:08.612 "seek_data": false, 00:16:08.612 "copy": true, 00:16:08.612 "nvme_iov_md": false 00:16:08.612 }, 00:16:08.612 "memory_domains": [ 00:16:08.612 { 00:16:08.612 "dma_device_id": "system", 00:16:08.612 "dma_device_type": 1 00:16:08.612 }, 00:16:08.612 { 00:16:08.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.612 "dma_device_type": 2 00:16:08.612 } 00:16:08.612 ], 00:16:08.612 "driver_specific": {} 00:16:08.612 } 00:16:08.612 ] 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:08.612 21:47:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.612 BaseBdev3 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.612 [ 00:16:08.612 { 00:16:08.612 "name": "BaseBdev3", 00:16:08.612 "aliases": [ 00:16:08.612 
"86426098-5992-4aed-9e4d-c779e1841fa1" 00:16:08.612 ], 00:16:08.612 "product_name": "Malloc disk", 00:16:08.612 "block_size": 512, 00:16:08.612 "num_blocks": 65536, 00:16:08.612 "uuid": "86426098-5992-4aed-9e4d-c779e1841fa1", 00:16:08.612 "assigned_rate_limits": { 00:16:08.612 "rw_ios_per_sec": 0, 00:16:08.612 "rw_mbytes_per_sec": 0, 00:16:08.612 "r_mbytes_per_sec": 0, 00:16:08.612 "w_mbytes_per_sec": 0 00:16:08.612 }, 00:16:08.612 "claimed": false, 00:16:08.612 "zoned": false, 00:16:08.612 "supported_io_types": { 00:16:08.612 "read": true, 00:16:08.612 "write": true, 00:16:08.612 "unmap": true, 00:16:08.612 "flush": true, 00:16:08.612 "reset": true, 00:16:08.612 "nvme_admin": false, 00:16:08.612 "nvme_io": false, 00:16:08.612 "nvme_io_md": false, 00:16:08.612 "write_zeroes": true, 00:16:08.612 "zcopy": true, 00:16:08.612 "get_zone_info": false, 00:16:08.612 "zone_management": false, 00:16:08.612 "zone_append": false, 00:16:08.612 "compare": false, 00:16:08.612 "compare_and_write": false, 00:16:08.612 "abort": true, 00:16:08.612 "seek_hole": false, 00:16:08.612 "seek_data": false, 00:16:08.612 "copy": true, 00:16:08.612 "nvme_iov_md": false 00:16:08.612 }, 00:16:08.612 "memory_domains": [ 00:16:08.612 { 00:16:08.612 "dma_device_id": "system", 00:16:08.612 "dma_device_type": 1 00:16:08.612 }, 00:16:08.612 { 00:16:08.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.612 "dma_device_type": 2 00:16:08.612 } 00:16:08.612 ], 00:16:08.612 "driver_specific": {} 00:16:08.612 } 00:16:08.612 ] 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.612 21:47:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:08.612 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.613 BaseBdev4 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:08.613 [ 00:16:08.613 { 00:16:08.613 "name": "BaseBdev4", 00:16:08.613 "aliases": [ 00:16:08.613 "562f8d5b-d5a5-4d71-9bc8-3e93e8ccf2a2" 00:16:08.613 ], 00:16:08.613 "product_name": "Malloc disk", 00:16:08.613 "block_size": 512, 00:16:08.613 "num_blocks": 65536, 00:16:08.613 "uuid": "562f8d5b-d5a5-4d71-9bc8-3e93e8ccf2a2", 00:16:08.613 "assigned_rate_limits": { 00:16:08.613 "rw_ios_per_sec": 0, 00:16:08.613 "rw_mbytes_per_sec": 0, 00:16:08.613 "r_mbytes_per_sec": 0, 00:16:08.613 "w_mbytes_per_sec": 0 00:16:08.613 }, 00:16:08.613 "claimed": false, 00:16:08.613 "zoned": false, 00:16:08.613 "supported_io_types": { 00:16:08.613 "read": true, 00:16:08.613 "write": true, 00:16:08.613 "unmap": true, 00:16:08.613 "flush": true, 00:16:08.613 "reset": true, 00:16:08.613 "nvme_admin": false, 00:16:08.613 "nvme_io": false, 00:16:08.613 "nvme_io_md": false, 00:16:08.613 "write_zeroes": true, 00:16:08.613 "zcopy": true, 00:16:08.613 "get_zone_info": false, 00:16:08.613 "zone_management": false, 00:16:08.613 "zone_append": false, 00:16:08.613 "compare": false, 00:16:08.613 "compare_and_write": false, 00:16:08.613 "abort": true, 00:16:08.613 "seek_hole": false, 00:16:08.613 "seek_data": false, 00:16:08.613 "copy": true, 00:16:08.613 "nvme_iov_md": false 00:16:08.613 }, 00:16:08.613 "memory_domains": [ 00:16:08.613 { 00:16:08.613 "dma_device_id": "system", 00:16:08.613 "dma_device_type": 1 00:16:08.613 }, 00:16:08.613 { 00:16:08.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.613 "dma_device_type": 2 00:16:08.613 } 00:16:08.613 ], 00:16:08.613 "driver_specific": {} 00:16:08.613 } 00:16:08.613 ] 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:08.613 21:47:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.613 [2024-09-29 21:47:27.504942] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.613 [2024-09-29 21:47:27.504994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.613 [2024-09-29 21:47:27.505014] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.613 [2024-09-29 21:47:27.506644] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:08.613 [2024-09-29 21:47:27.506698] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.613 "name": "Existed_Raid", 00:16:08.613 "uuid": "f7a14604-ff81-481f-8293-e81079cd440f", 00:16:08.613 "strip_size_kb": 64, 00:16:08.613 "state": "configuring", 00:16:08.613 "raid_level": "raid5f", 00:16:08.613 "superblock": true, 00:16:08.613 "num_base_bdevs": 4, 00:16:08.613 "num_base_bdevs_discovered": 3, 00:16:08.613 "num_base_bdevs_operational": 4, 00:16:08.613 "base_bdevs_list": [ 00:16:08.613 { 00:16:08.613 "name": "BaseBdev1", 00:16:08.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.613 "is_configured": false, 00:16:08.613 "data_offset": 0, 00:16:08.613 "data_size": 0 00:16:08.613 }, 00:16:08.613 { 00:16:08.613 "name": "BaseBdev2", 00:16:08.613 "uuid": "85d75ba9-ed29-41ed-93ee-e964df4f3f77", 00:16:08.613 "is_configured": true, 00:16:08.613 "data_offset": 2048, 00:16:08.613 
"data_size": 63488 00:16:08.613 }, 00:16:08.613 { 00:16:08.613 "name": "BaseBdev3", 00:16:08.613 "uuid": "86426098-5992-4aed-9e4d-c779e1841fa1", 00:16:08.613 "is_configured": true, 00:16:08.613 "data_offset": 2048, 00:16:08.613 "data_size": 63488 00:16:08.613 }, 00:16:08.613 { 00:16:08.613 "name": "BaseBdev4", 00:16:08.613 "uuid": "562f8d5b-d5a5-4d71-9bc8-3e93e8ccf2a2", 00:16:08.613 "is_configured": true, 00:16:08.613 "data_offset": 2048, 00:16:08.613 "data_size": 63488 00:16:08.613 } 00:16:08.613 ] 00:16:08.613 }' 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.613 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.183 [2024-09-29 21:47:27.920250] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.183 21:47:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.183 "name": "Existed_Raid", 00:16:09.183 "uuid": "f7a14604-ff81-481f-8293-e81079cd440f", 00:16:09.183 "strip_size_kb": 64, 00:16:09.183 "state": "configuring", 00:16:09.183 "raid_level": "raid5f", 00:16:09.183 "superblock": true, 00:16:09.183 "num_base_bdevs": 4, 00:16:09.183 "num_base_bdevs_discovered": 2, 00:16:09.183 "num_base_bdevs_operational": 4, 00:16:09.183 "base_bdevs_list": [ 00:16:09.183 { 00:16:09.183 "name": "BaseBdev1", 00:16:09.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.183 "is_configured": false, 00:16:09.183 "data_offset": 0, 00:16:09.183 "data_size": 0 00:16:09.183 }, 00:16:09.183 { 00:16:09.183 "name": null, 00:16:09.183 "uuid": "85d75ba9-ed29-41ed-93ee-e964df4f3f77", 00:16:09.183 
"is_configured": false, 00:16:09.183 "data_offset": 0, 00:16:09.183 "data_size": 63488 00:16:09.183 }, 00:16:09.183 { 00:16:09.183 "name": "BaseBdev3", 00:16:09.183 "uuid": "86426098-5992-4aed-9e4d-c779e1841fa1", 00:16:09.183 "is_configured": true, 00:16:09.183 "data_offset": 2048, 00:16:09.183 "data_size": 63488 00:16:09.183 }, 00:16:09.183 { 00:16:09.183 "name": "BaseBdev4", 00:16:09.183 "uuid": "562f8d5b-d5a5-4d71-9bc8-3e93e8ccf2a2", 00:16:09.183 "is_configured": true, 00:16:09.183 "data_offset": 2048, 00:16:09.183 "data_size": 63488 00:16:09.183 } 00:16:09.183 ] 00:16:09.183 }' 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.183 21:47:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.444 [2024-09-29 21:47:28.393062] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:09.444 BaseBdev1 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.444 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.444 [ 00:16:09.444 { 00:16:09.444 "name": "BaseBdev1", 00:16:09.444 "aliases": [ 00:16:09.444 "ebbf5a5f-a3ad-4313-8305-abe3aa87746c" 00:16:09.444 ], 00:16:09.444 "product_name": "Malloc disk", 00:16:09.444 "block_size": 512, 00:16:09.444 "num_blocks": 65536, 00:16:09.444 "uuid": "ebbf5a5f-a3ad-4313-8305-abe3aa87746c", 
00:16:09.444 "assigned_rate_limits": { 00:16:09.444 "rw_ios_per_sec": 0, 00:16:09.444 "rw_mbytes_per_sec": 0, 00:16:09.444 "r_mbytes_per_sec": 0, 00:16:09.444 "w_mbytes_per_sec": 0 00:16:09.444 }, 00:16:09.444 "claimed": true, 00:16:09.444 "claim_type": "exclusive_write", 00:16:09.444 "zoned": false, 00:16:09.444 "supported_io_types": { 00:16:09.444 "read": true, 00:16:09.444 "write": true, 00:16:09.444 "unmap": true, 00:16:09.444 "flush": true, 00:16:09.444 "reset": true, 00:16:09.444 "nvme_admin": false, 00:16:09.444 "nvme_io": false, 00:16:09.444 "nvme_io_md": false, 00:16:09.444 "write_zeroes": true, 00:16:09.444 "zcopy": true, 00:16:09.444 "get_zone_info": false, 00:16:09.444 "zone_management": false, 00:16:09.444 "zone_append": false, 00:16:09.444 "compare": false, 00:16:09.444 "compare_and_write": false, 00:16:09.444 "abort": true, 00:16:09.444 "seek_hole": false, 00:16:09.444 "seek_data": false, 00:16:09.444 "copy": true, 00:16:09.444 "nvme_iov_md": false 00:16:09.444 }, 00:16:09.444 "memory_domains": [ 00:16:09.704 { 00:16:09.704 "dma_device_id": "system", 00:16:09.704 "dma_device_type": 1 00:16:09.704 }, 00:16:09.704 { 00:16:09.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.704 "dma_device_type": 2 00:16:09.704 } 00:16:09.704 ], 00:16:09.704 "driver_specific": {} 00:16:09.704 } 00:16:09.704 ] 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.704 21:47:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.704 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.704 "name": "Existed_Raid", 00:16:09.704 "uuid": "f7a14604-ff81-481f-8293-e81079cd440f", 00:16:09.705 "strip_size_kb": 64, 00:16:09.705 "state": "configuring", 00:16:09.705 "raid_level": "raid5f", 00:16:09.705 "superblock": true, 00:16:09.705 "num_base_bdevs": 4, 00:16:09.705 "num_base_bdevs_discovered": 3, 00:16:09.705 "num_base_bdevs_operational": 4, 00:16:09.705 "base_bdevs_list": [ 00:16:09.705 { 00:16:09.705 "name": "BaseBdev1", 00:16:09.705 "uuid": "ebbf5a5f-a3ad-4313-8305-abe3aa87746c", 
00:16:09.705 "is_configured": true, 00:16:09.705 "data_offset": 2048, 00:16:09.705 "data_size": 63488 00:16:09.705 }, 00:16:09.705 { 00:16:09.705 "name": null, 00:16:09.705 "uuid": "85d75ba9-ed29-41ed-93ee-e964df4f3f77", 00:16:09.705 "is_configured": false, 00:16:09.705 "data_offset": 0, 00:16:09.705 "data_size": 63488 00:16:09.705 }, 00:16:09.705 { 00:16:09.705 "name": "BaseBdev3", 00:16:09.705 "uuid": "86426098-5992-4aed-9e4d-c779e1841fa1", 00:16:09.705 "is_configured": true, 00:16:09.705 "data_offset": 2048, 00:16:09.705 "data_size": 63488 00:16:09.705 }, 00:16:09.705 { 00:16:09.705 "name": "BaseBdev4", 00:16:09.705 "uuid": "562f8d5b-d5a5-4d71-9bc8-3e93e8ccf2a2", 00:16:09.705 "is_configured": true, 00:16:09.705 "data_offset": 2048, 00:16:09.705 "data_size": 63488 00:16:09.705 } 00:16:09.705 ] 00:16:09.705 }' 00:16:09.705 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.705 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.965 [2024-09-29 21:47:28.912211] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.965 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.224 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.224 "name": "Existed_Raid", 00:16:10.224 "uuid": "f7a14604-ff81-481f-8293-e81079cd440f", 00:16:10.224 "strip_size_kb": 64, 00:16:10.224 "state": "configuring", 00:16:10.224 "raid_level": "raid5f", 00:16:10.224 "superblock": true, 00:16:10.224 "num_base_bdevs": 4, 00:16:10.224 "num_base_bdevs_discovered": 2, 00:16:10.224 "num_base_bdevs_operational": 4, 00:16:10.224 "base_bdevs_list": [ 00:16:10.224 { 00:16:10.224 "name": "BaseBdev1", 00:16:10.224 "uuid": "ebbf5a5f-a3ad-4313-8305-abe3aa87746c", 00:16:10.224 "is_configured": true, 00:16:10.224 "data_offset": 2048, 00:16:10.224 "data_size": 63488 00:16:10.224 }, 00:16:10.224 { 00:16:10.224 "name": null, 00:16:10.224 "uuid": "85d75ba9-ed29-41ed-93ee-e964df4f3f77", 00:16:10.224 "is_configured": false, 00:16:10.224 "data_offset": 0, 00:16:10.224 "data_size": 63488 00:16:10.224 }, 00:16:10.224 { 00:16:10.224 "name": null, 00:16:10.224 "uuid": "86426098-5992-4aed-9e4d-c779e1841fa1", 00:16:10.224 "is_configured": false, 00:16:10.224 "data_offset": 0, 00:16:10.224 "data_size": 63488 00:16:10.224 }, 00:16:10.224 { 00:16:10.224 "name": "BaseBdev4", 00:16:10.224 "uuid": "562f8d5b-d5a5-4d71-9bc8-3e93e8ccf2a2", 00:16:10.224 "is_configured": true, 00:16:10.224 "data_offset": 2048, 00:16:10.224 "data_size": 63488 00:16:10.224 } 00:16:10.224 ] 00:16:10.224 }' 00:16:10.224 21:47:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.224 21:47:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.483 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.483 21:47:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.483 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.484 [2024-09-29 21:47:29.403362] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.484 "name": "Existed_Raid", 00:16:10.484 "uuid": "f7a14604-ff81-481f-8293-e81079cd440f", 00:16:10.484 "strip_size_kb": 64, 00:16:10.484 "state": "configuring", 00:16:10.484 "raid_level": "raid5f", 00:16:10.484 "superblock": true, 00:16:10.484 "num_base_bdevs": 4, 00:16:10.484 "num_base_bdevs_discovered": 3, 00:16:10.484 "num_base_bdevs_operational": 4, 00:16:10.484 "base_bdevs_list": [ 00:16:10.484 { 00:16:10.484 "name": "BaseBdev1", 00:16:10.484 "uuid": "ebbf5a5f-a3ad-4313-8305-abe3aa87746c", 00:16:10.484 "is_configured": true, 00:16:10.484 "data_offset": 2048, 00:16:10.484 "data_size": 63488 00:16:10.484 }, 00:16:10.484 { 00:16:10.484 "name": null, 00:16:10.484 "uuid": "85d75ba9-ed29-41ed-93ee-e964df4f3f77", 00:16:10.484 "is_configured": false, 00:16:10.484 "data_offset": 0, 00:16:10.484 "data_size": 63488 00:16:10.484 }, 00:16:10.484 { 00:16:10.484 "name": "BaseBdev3", 00:16:10.484 "uuid": "86426098-5992-4aed-9e4d-c779e1841fa1", 
00:16:10.484 "is_configured": true, 00:16:10.484 "data_offset": 2048, 00:16:10.484 "data_size": 63488 00:16:10.484 }, 00:16:10.484 { 00:16:10.484 "name": "BaseBdev4", 00:16:10.484 "uuid": "562f8d5b-d5a5-4d71-9bc8-3e93e8ccf2a2", 00:16:10.484 "is_configured": true, 00:16:10.484 "data_offset": 2048, 00:16:10.484 "data_size": 63488 00:16:10.484 } 00:16:10.484 ] 00:16:10.484 }' 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.484 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.054 [2024-09-29 21:47:29.902535] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.054 21:47:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.054 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.054 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.054 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.317 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.317 "name": "Existed_Raid", 00:16:11.317 "uuid": "f7a14604-ff81-481f-8293-e81079cd440f", 00:16:11.317 "strip_size_kb": 64, 00:16:11.317 "state": "configuring", 00:16:11.317 "raid_level": "raid5f", 
00:16:11.317 "superblock": true, 00:16:11.317 "num_base_bdevs": 4, 00:16:11.317 "num_base_bdevs_discovered": 2, 00:16:11.317 "num_base_bdevs_operational": 4, 00:16:11.317 "base_bdevs_list": [ 00:16:11.317 { 00:16:11.317 "name": null, 00:16:11.317 "uuid": "ebbf5a5f-a3ad-4313-8305-abe3aa87746c", 00:16:11.317 "is_configured": false, 00:16:11.317 "data_offset": 0, 00:16:11.317 "data_size": 63488 00:16:11.317 }, 00:16:11.317 { 00:16:11.317 "name": null, 00:16:11.317 "uuid": "85d75ba9-ed29-41ed-93ee-e964df4f3f77", 00:16:11.317 "is_configured": false, 00:16:11.317 "data_offset": 0, 00:16:11.317 "data_size": 63488 00:16:11.317 }, 00:16:11.317 { 00:16:11.317 "name": "BaseBdev3", 00:16:11.317 "uuid": "86426098-5992-4aed-9e4d-c779e1841fa1", 00:16:11.317 "is_configured": true, 00:16:11.317 "data_offset": 2048, 00:16:11.317 "data_size": 63488 00:16:11.317 }, 00:16:11.317 { 00:16:11.317 "name": "BaseBdev4", 00:16:11.317 "uuid": "562f8d5b-d5a5-4d71-9bc8-3e93e8ccf2a2", 00:16:11.317 "is_configured": true, 00:16:11.317 "data_offset": 2048, 00:16:11.317 "data_size": 63488 00:16:11.317 } 00:16:11.317 ] 00:16:11.317 }' 00:16:11.317 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.317 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.578 [2024-09-29 21:47:30.510600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.578 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.838 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.838 "name": "Existed_Raid", 00:16:11.838 "uuid": "f7a14604-ff81-481f-8293-e81079cd440f", 00:16:11.838 "strip_size_kb": 64, 00:16:11.838 "state": "configuring", 00:16:11.838 "raid_level": "raid5f", 00:16:11.838 "superblock": true, 00:16:11.838 "num_base_bdevs": 4, 00:16:11.838 "num_base_bdevs_discovered": 3, 00:16:11.838 "num_base_bdevs_operational": 4, 00:16:11.838 "base_bdevs_list": [ 00:16:11.838 { 00:16:11.838 "name": null, 00:16:11.838 "uuid": "ebbf5a5f-a3ad-4313-8305-abe3aa87746c", 00:16:11.838 "is_configured": false, 00:16:11.838 "data_offset": 0, 00:16:11.838 "data_size": 63488 00:16:11.838 }, 00:16:11.838 { 00:16:11.838 "name": "BaseBdev2", 00:16:11.838 "uuid": "85d75ba9-ed29-41ed-93ee-e964df4f3f77", 00:16:11.838 "is_configured": true, 00:16:11.838 "data_offset": 2048, 00:16:11.838 "data_size": 63488 00:16:11.838 }, 00:16:11.838 { 00:16:11.838 "name": "BaseBdev3", 00:16:11.838 "uuid": "86426098-5992-4aed-9e4d-c779e1841fa1", 00:16:11.838 "is_configured": true, 00:16:11.838 "data_offset": 2048, 00:16:11.838 "data_size": 63488 00:16:11.838 }, 00:16:11.838 { 00:16:11.838 "name": "BaseBdev4", 00:16:11.838 "uuid": "562f8d5b-d5a5-4d71-9bc8-3e93e8ccf2a2", 00:16:11.838 "is_configured": true, 00:16:11.838 "data_offset": 2048, 00:16:11.838 "data_size": 63488 00:16:11.838 } 00:16:11.838 ] 00:16:11.838 }' 00:16:11.838 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:16:11.838 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ebbf5a5f-a3ad-4313-8305-abe3aa87746c 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.098 21:47:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.098 [2024-09-29 21:47:31.028870] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:12.098 [2024-09-29 21:47:31.029113] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:12.098 [2024-09-29 21:47:31.029125] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:12.098 [2024-09-29 21:47:31.029363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:12.098 NewBaseBdev 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.098 [2024-09-29 21:47:31.036663] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:12.098 [2024-09-29 21:47:31.036690] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:12.098 [2024-09-29 21:47:31.036945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.098 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.098 [ 00:16:12.098 { 00:16:12.098 "name": "NewBaseBdev", 00:16:12.098 "aliases": [ 00:16:12.098 "ebbf5a5f-a3ad-4313-8305-abe3aa87746c" 00:16:12.098 ], 00:16:12.098 "product_name": "Malloc disk", 00:16:12.098 "block_size": 512, 00:16:12.098 "num_blocks": 65536, 00:16:12.098 "uuid": "ebbf5a5f-a3ad-4313-8305-abe3aa87746c", 00:16:12.098 "assigned_rate_limits": { 00:16:12.098 "rw_ios_per_sec": 0, 00:16:12.098 "rw_mbytes_per_sec": 0, 00:16:12.098 "r_mbytes_per_sec": 0, 00:16:12.098 "w_mbytes_per_sec": 0 00:16:12.098 }, 00:16:12.098 "claimed": true, 00:16:12.098 "claim_type": "exclusive_write", 00:16:12.098 "zoned": false, 00:16:12.098 "supported_io_types": { 00:16:12.098 "read": true, 00:16:12.098 "write": true, 00:16:12.098 "unmap": true, 00:16:12.098 "flush": true, 00:16:12.098 "reset": true, 00:16:12.098 "nvme_admin": false, 00:16:12.098 "nvme_io": false, 00:16:12.098 "nvme_io_md": false, 00:16:12.098 "write_zeroes": true, 00:16:12.098 "zcopy": true, 00:16:12.098 "get_zone_info": false, 00:16:12.098 "zone_management": false, 00:16:12.098 "zone_append": false, 00:16:12.098 "compare": false, 00:16:12.098 "compare_and_write": false, 00:16:12.098 "abort": true, 00:16:12.098 "seek_hole": false, 00:16:12.098 "seek_data": false, 00:16:12.098 "copy": true, 00:16:12.099 "nvme_iov_md": false 00:16:12.099 }, 00:16:12.099 "memory_domains": [ 00:16:12.099 { 00:16:12.099 "dma_device_id": "system", 00:16:12.099 "dma_device_type": 1 00:16:12.099 }, 00:16:12.099 { 00:16:12.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.099 "dma_device_type": 2 00:16:12.099 } 
00:16:12.099 ], 00:16:12.099 "driver_specific": {} 00:16:12.099 } 00:16:12.099 ] 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.099 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.358 
21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.358 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.358 "name": "Existed_Raid", 00:16:12.358 "uuid": "f7a14604-ff81-481f-8293-e81079cd440f", 00:16:12.358 "strip_size_kb": 64, 00:16:12.358 "state": "online", 00:16:12.359 "raid_level": "raid5f", 00:16:12.359 "superblock": true, 00:16:12.359 "num_base_bdevs": 4, 00:16:12.359 "num_base_bdevs_discovered": 4, 00:16:12.359 "num_base_bdevs_operational": 4, 00:16:12.359 "base_bdevs_list": [ 00:16:12.359 { 00:16:12.359 "name": "NewBaseBdev", 00:16:12.359 "uuid": "ebbf5a5f-a3ad-4313-8305-abe3aa87746c", 00:16:12.359 "is_configured": true, 00:16:12.359 "data_offset": 2048, 00:16:12.359 "data_size": 63488 00:16:12.359 }, 00:16:12.359 { 00:16:12.359 "name": "BaseBdev2", 00:16:12.359 "uuid": "85d75ba9-ed29-41ed-93ee-e964df4f3f77", 00:16:12.359 "is_configured": true, 00:16:12.359 "data_offset": 2048, 00:16:12.359 "data_size": 63488 00:16:12.359 }, 00:16:12.359 { 00:16:12.359 "name": "BaseBdev3", 00:16:12.359 "uuid": "86426098-5992-4aed-9e4d-c779e1841fa1", 00:16:12.359 "is_configured": true, 00:16:12.359 "data_offset": 2048, 00:16:12.359 "data_size": 63488 00:16:12.359 }, 00:16:12.359 { 00:16:12.359 "name": "BaseBdev4", 00:16:12.359 "uuid": "562f8d5b-d5a5-4d71-9bc8-3e93e8ccf2a2", 00:16:12.359 "is_configured": true, 00:16:12.359 "data_offset": 2048, 00:16:12.359 "data_size": 63488 00:16:12.359 } 00:16:12.359 ] 00:16:12.359 }' 00:16:12.359 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.359 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.619 [2024-09-29 21:47:31.476303] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:12.619 "name": "Existed_Raid", 00:16:12.619 "aliases": [ 00:16:12.619 "f7a14604-ff81-481f-8293-e81079cd440f" 00:16:12.619 ], 00:16:12.619 "product_name": "Raid Volume", 00:16:12.619 "block_size": 512, 00:16:12.619 "num_blocks": 190464, 00:16:12.619 "uuid": "f7a14604-ff81-481f-8293-e81079cd440f", 00:16:12.619 "assigned_rate_limits": { 00:16:12.619 "rw_ios_per_sec": 0, 00:16:12.619 "rw_mbytes_per_sec": 0, 00:16:12.619 "r_mbytes_per_sec": 0, 00:16:12.619 "w_mbytes_per_sec": 0 00:16:12.619 }, 00:16:12.619 "claimed": false, 00:16:12.619 "zoned": false, 00:16:12.619 "supported_io_types": { 00:16:12.619 "read": true, 00:16:12.619 "write": true, 00:16:12.619 "unmap": false, 00:16:12.619 "flush": false, 
00:16:12.619 "reset": true, 00:16:12.619 "nvme_admin": false, 00:16:12.619 "nvme_io": false, 00:16:12.619 "nvme_io_md": false, 00:16:12.619 "write_zeroes": true, 00:16:12.619 "zcopy": false, 00:16:12.619 "get_zone_info": false, 00:16:12.619 "zone_management": false, 00:16:12.619 "zone_append": false, 00:16:12.619 "compare": false, 00:16:12.619 "compare_and_write": false, 00:16:12.619 "abort": false, 00:16:12.619 "seek_hole": false, 00:16:12.619 "seek_data": false, 00:16:12.619 "copy": false, 00:16:12.619 "nvme_iov_md": false 00:16:12.619 }, 00:16:12.619 "driver_specific": { 00:16:12.619 "raid": { 00:16:12.619 "uuid": "f7a14604-ff81-481f-8293-e81079cd440f", 00:16:12.619 "strip_size_kb": 64, 00:16:12.619 "state": "online", 00:16:12.619 "raid_level": "raid5f", 00:16:12.619 "superblock": true, 00:16:12.619 "num_base_bdevs": 4, 00:16:12.619 "num_base_bdevs_discovered": 4, 00:16:12.619 "num_base_bdevs_operational": 4, 00:16:12.619 "base_bdevs_list": [ 00:16:12.619 { 00:16:12.619 "name": "NewBaseBdev", 00:16:12.619 "uuid": "ebbf5a5f-a3ad-4313-8305-abe3aa87746c", 00:16:12.619 "is_configured": true, 00:16:12.619 "data_offset": 2048, 00:16:12.619 "data_size": 63488 00:16:12.619 }, 00:16:12.619 { 00:16:12.619 "name": "BaseBdev2", 00:16:12.619 "uuid": "85d75ba9-ed29-41ed-93ee-e964df4f3f77", 00:16:12.619 "is_configured": true, 00:16:12.619 "data_offset": 2048, 00:16:12.619 "data_size": 63488 00:16:12.619 }, 00:16:12.619 { 00:16:12.619 "name": "BaseBdev3", 00:16:12.619 "uuid": "86426098-5992-4aed-9e4d-c779e1841fa1", 00:16:12.619 "is_configured": true, 00:16:12.619 "data_offset": 2048, 00:16:12.619 "data_size": 63488 00:16:12.619 }, 00:16:12.619 { 00:16:12.619 "name": "BaseBdev4", 00:16:12.619 "uuid": "562f8d5b-d5a5-4d71-9bc8-3e93e8ccf2a2", 00:16:12.619 "is_configured": true, 00:16:12.619 "data_offset": 2048, 00:16:12.619 "data_size": 63488 00:16:12.619 } 00:16:12.619 ] 00:16:12.619 } 00:16:12.619 } 00:16:12.619 }' 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:12.619 BaseBdev2 00:16:12.619 BaseBdev3 00:16:12.619 BaseBdev4' 00:16:12.619 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:12.879 
21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:12.879 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.880 [2024-09-29 21:47:31.787573] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:12.880 [2024-09-29 21:47:31.787601] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.880 [2024-09-29 21:47:31.787666] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.880 [2024-09-29 21:47:31.787933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.880 [2024-09-29 21:47:31.787951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83492 00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83492 ']' 00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83492 
00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83492 00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:12.880 killing process with pid 83492 00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83492' 00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83492 00:16:12.880 [2024-09-29 21:47:31.834687] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:12.880 21:47:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83492 00:16:13.450 [2024-09-29 21:47:32.203871] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.833 21:47:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:14.833 00:16:14.833 real 0m11.425s 00:16:14.833 user 0m18.007s 00:16:14.833 sys 0m2.216s 00:16:14.833 21:47:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:14.833 21:47:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.833 ************************************ 00:16:14.833 END TEST raid5f_state_function_test_sb 00:16:14.833 ************************************ 00:16:14.833 21:47:33 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:14.833 21:47:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 
']' 00:16:14.833 21:47:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:14.833 21:47:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.833 ************************************ 00:16:14.833 START TEST raid5f_superblock_test 00:16:14.833 ************************************ 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84162 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84162 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84162 ']' 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.833 21:47:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:14.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.834 21:47:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.834 21:47:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:14.834 21:47:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.834 [2024-09-29 21:47:33.569850] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:14.834 [2024-09-29 21:47:33.569971] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84162 ] 00:16:14.834 [2024-09-29 21:47:33.736290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.094 [2024-09-29 21:47:33.931242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.354 [2024-09-29 21:47:34.123525] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.354 [2024-09-29 21:47:34.123575] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 malloc1 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 [2024-09-29 21:47:34.440833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:15.615 [2024-09-29 21:47:34.440893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.615 [2024-09-29 21:47:34.440912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:15.615 [2024-09-29 21:47:34.440923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.615 [2024-09-29 21:47:34.442825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.615 [2024-09-29 21:47:34.442858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:15.615 pt1 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 malloc2 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 [2024-09-29 21:47:34.510206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.615 [2024-09-29 21:47:34.510262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.615 [2024-09-29 21:47:34.510280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:15.615 [2024-09-29 21:47:34.510288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.615 [2024-09-29 21:47:34.512167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.615 [2024-09-29 21:47:34.512199] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.615 pt2 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 malloc3 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 [2024-09-29 21:47:34.563409] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:15.615 [2024-09-29 21:47:34.563457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.615 [2024-09-29 21:47:34.563474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:15.615 [2024-09-29 21:47:34.563482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.615 [2024-09-29 21:47:34.565319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.615 [2024-09-29 21:47:34.565350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:15.615 pt3 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:15.615 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.615 21:47:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.876 malloc4 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.876 [2024-09-29 21:47:34.616525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:15.876 [2024-09-29 21:47:34.616573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.876 [2024-09-29 21:47:34.616590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:15.876 [2024-09-29 21:47:34.616599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.876 [2024-09-29 21:47:34.618651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.876 [2024-09-29 21:47:34.618684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:15.876 pt4 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:15.876 [2024-09-29 21:47:34.628563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:15.876 [2024-09-29 21:47:34.630187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:15.876 [2024-09-29 21:47:34.630247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:15.876 [2024-09-29 21:47:34.630304] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:15.876 [2024-09-29 21:47:34.630479] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:15.876 [2024-09-29 21:47:34.630503] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:15.876 [2024-09-29 21:47:34.630722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:15.876 [2024-09-29 21:47:34.637173] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:15.876 [2024-09-29 21:47:34.637197] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:15.876 [2024-09-29 21:47:34.637362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.876 
21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.876 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.876 "name": "raid_bdev1", 00:16:15.876 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:15.876 "strip_size_kb": 64, 00:16:15.876 "state": "online", 00:16:15.876 "raid_level": "raid5f", 00:16:15.876 "superblock": true, 00:16:15.876 "num_base_bdevs": 4, 00:16:15.876 "num_base_bdevs_discovered": 4, 00:16:15.876 "num_base_bdevs_operational": 4, 00:16:15.876 "base_bdevs_list": [ 00:16:15.876 { 00:16:15.876 "name": "pt1", 00:16:15.876 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.876 "is_configured": true, 00:16:15.876 "data_offset": 2048, 00:16:15.876 "data_size": 63488 00:16:15.876 }, 00:16:15.876 { 00:16:15.876 "name": "pt2", 00:16:15.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.876 "is_configured": true, 00:16:15.876 "data_offset": 2048, 00:16:15.876 
"data_size": 63488 00:16:15.876 }, 00:16:15.876 { 00:16:15.876 "name": "pt3", 00:16:15.876 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:15.877 "is_configured": true, 00:16:15.877 "data_offset": 2048, 00:16:15.877 "data_size": 63488 00:16:15.877 }, 00:16:15.877 { 00:16:15.877 "name": "pt4", 00:16:15.877 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:15.877 "is_configured": true, 00:16:15.877 "data_offset": 2048, 00:16:15.877 "data_size": 63488 00:16:15.877 } 00:16:15.877 ] 00:16:15.877 }' 00:16:15.877 21:47:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.877 21:47:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.137 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:16.137 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:16.137 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:16.137 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:16.137 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:16.137 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:16.137 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.137 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:16.137 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.137 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.137 [2024-09-29 21:47:35.076396] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.137 21:47:35 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.137 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:16.137 "name": "raid_bdev1", 00:16:16.137 "aliases": [ 00:16:16.137 "94519c6b-1012-49d2-9a55-667110f6f411" 00:16:16.137 ], 00:16:16.137 "product_name": "Raid Volume", 00:16:16.137 "block_size": 512, 00:16:16.137 "num_blocks": 190464, 00:16:16.137 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:16.137 "assigned_rate_limits": { 00:16:16.137 "rw_ios_per_sec": 0, 00:16:16.137 "rw_mbytes_per_sec": 0, 00:16:16.137 "r_mbytes_per_sec": 0, 00:16:16.137 "w_mbytes_per_sec": 0 00:16:16.137 }, 00:16:16.137 "claimed": false, 00:16:16.137 "zoned": false, 00:16:16.137 "supported_io_types": { 00:16:16.137 "read": true, 00:16:16.137 "write": true, 00:16:16.137 "unmap": false, 00:16:16.137 "flush": false, 00:16:16.137 "reset": true, 00:16:16.137 "nvme_admin": false, 00:16:16.137 "nvme_io": false, 00:16:16.137 "nvme_io_md": false, 00:16:16.137 "write_zeroes": true, 00:16:16.137 "zcopy": false, 00:16:16.137 "get_zone_info": false, 00:16:16.137 "zone_management": false, 00:16:16.137 "zone_append": false, 00:16:16.137 "compare": false, 00:16:16.137 "compare_and_write": false, 00:16:16.137 "abort": false, 00:16:16.137 "seek_hole": false, 00:16:16.137 "seek_data": false, 00:16:16.137 "copy": false, 00:16:16.137 "nvme_iov_md": false 00:16:16.137 }, 00:16:16.137 "driver_specific": { 00:16:16.137 "raid": { 00:16:16.137 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:16.137 "strip_size_kb": 64, 00:16:16.137 "state": "online", 00:16:16.137 "raid_level": "raid5f", 00:16:16.137 "superblock": true, 00:16:16.137 "num_base_bdevs": 4, 00:16:16.137 "num_base_bdevs_discovered": 4, 00:16:16.137 "num_base_bdevs_operational": 4, 00:16:16.137 "base_bdevs_list": [ 00:16:16.137 { 00:16:16.137 "name": "pt1", 00:16:16.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.137 "is_configured": true, 00:16:16.137 "data_offset": 2048, 
00:16:16.137 "data_size": 63488 00:16:16.137 }, 00:16:16.137 { 00:16:16.137 "name": "pt2", 00:16:16.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.137 "is_configured": true, 00:16:16.137 "data_offset": 2048, 00:16:16.137 "data_size": 63488 00:16:16.137 }, 00:16:16.137 { 00:16:16.137 "name": "pt3", 00:16:16.137 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:16.137 "is_configured": true, 00:16:16.137 "data_offset": 2048, 00:16:16.137 "data_size": 63488 00:16:16.137 }, 00:16:16.137 { 00:16:16.137 "name": "pt4", 00:16:16.137 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:16.137 "is_configured": true, 00:16:16.137 "data_offset": 2048, 00:16:16.137 "data_size": 63488 00:16:16.137 } 00:16:16.137 ] 00:16:16.137 } 00:16:16.137 } 00:16:16.137 }' 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:16.398 pt2 00:16:16.398 pt3 00:16:16.398 pt4' 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.398 21:47:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.398 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:16.398 [2024-09-29 21:47:35.380078] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=94519c6b-1012-49d2-9a55-667110f6f411 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
94519c6b-1012-49d2-9a55-667110f6f411 ']' 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.659 [2024-09-29 21:47:35.427834] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.659 [2024-09-29 21:47:35.427858] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.659 [2024-09-29 21:47:35.427911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.659 [2024-09-29 21:47:35.427974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.659 [2024-09-29 21:47:35.427987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:16.659 
21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.659 21:47:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:16.659 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:16.660 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:16.660 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.660 [2024-09-29 21:47:35.595570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:16.660 [2024-09-29 21:47:35.597224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:16.660 [2024-09-29 21:47:35.597270] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:16.660 [2024-09-29 21:47:35.597299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:16.660 [2024-09-29 21:47:35.597339] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:16.660 [2024-09-29 21:47:35.597373] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:16.660 [2024-09-29 21:47:35.597389] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:16.660 [2024-09-29 21:47:35.597407] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:16.660 [2024-09-29 21:47:35.597418] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.660 [2024-09-29 21:47:35.597429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:16.660 request: 00:16:16.660 { 00:16:16.660 "name": "raid_bdev1", 00:16:16.660 "raid_level": "raid5f", 00:16:16.660 "base_bdevs": [ 00:16:16.660 "malloc1", 00:16:16.660 "malloc2", 00:16:16.660 "malloc3", 00:16:16.660 "malloc4" 00:16:16.660 ], 00:16:16.660 "strip_size_kb": 64, 00:16:16.660 "superblock": false, 00:16:16.660 "method": "bdev_raid_create", 00:16:16.660 "req_id": 1 00:16:16.660 } 00:16:16.660 Got JSON-RPC error response 
00:16:16.660 response: 00:16:16.660 { 00:16:16.660 "code": -17, 00:16:16.660 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:16.660 } 00:16:16.660 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:16.660 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:16.660 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:16.660 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:16.660 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:16.660 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:16.660 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.660 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.660 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.660 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.920 [2024-09-29 21:47:35.659435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:16.920 [2024-09-29 21:47:35.659480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:16.920 [2024-09-29 21:47:35.659493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:16.920 [2024-09-29 21:47:35.659503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.920 [2024-09-29 21:47:35.661433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.920 [2024-09-29 21:47:35.661472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:16.920 [2024-09-29 21:47:35.661530] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:16.920 [2024-09-29 21:47:35.661586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:16.920 pt1 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.920 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.920 "name": "raid_bdev1", 00:16:16.920 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:16.920 "strip_size_kb": 64, 00:16:16.920 "state": "configuring", 00:16:16.920 "raid_level": "raid5f", 00:16:16.920 "superblock": true, 00:16:16.920 "num_base_bdevs": 4, 00:16:16.920 "num_base_bdevs_discovered": 1, 00:16:16.920 "num_base_bdevs_operational": 4, 00:16:16.920 "base_bdevs_list": [ 00:16:16.920 { 00:16:16.920 "name": "pt1", 00:16:16.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.920 "is_configured": true, 00:16:16.920 "data_offset": 2048, 00:16:16.920 "data_size": 63488 00:16:16.920 }, 00:16:16.920 { 00:16:16.920 "name": null, 00:16:16.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.921 "is_configured": false, 00:16:16.921 "data_offset": 2048, 00:16:16.921 "data_size": 63488 00:16:16.921 }, 00:16:16.921 { 00:16:16.921 "name": null, 00:16:16.921 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:16.921 "is_configured": false, 00:16:16.921 "data_offset": 2048, 00:16:16.921 "data_size": 63488 00:16:16.921 }, 00:16:16.921 { 00:16:16.921 "name": null, 00:16:16.921 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:16.921 "is_configured": false, 00:16:16.921 "data_offset": 2048, 00:16:16.921 "data_size": 63488 00:16:16.921 } 00:16:16.921 ] 00:16:16.921 }' 
00:16:16.921 21:47:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.921 21:47:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.181 [2024-09-29 21:47:36.134649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:17.181 [2024-09-29 21:47:36.134694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.181 [2024-09-29 21:47:36.134706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:17.181 [2024-09-29 21:47:36.134716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.181 [2024-09-29 21:47:36.135067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.181 [2024-09-29 21:47:36.135087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:17.181 [2024-09-29 21:47:36.135139] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:17.181 [2024-09-29 21:47:36.135158] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:17.181 pt2 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.181 [2024-09-29 21:47:36.146649] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.181 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.441 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:16:17.441 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.441 "name": "raid_bdev1", 00:16:17.441 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:17.441 "strip_size_kb": 64, 00:16:17.441 "state": "configuring", 00:16:17.441 "raid_level": "raid5f", 00:16:17.441 "superblock": true, 00:16:17.441 "num_base_bdevs": 4, 00:16:17.441 "num_base_bdevs_discovered": 1, 00:16:17.441 "num_base_bdevs_operational": 4, 00:16:17.441 "base_bdevs_list": [ 00:16:17.441 { 00:16:17.441 "name": "pt1", 00:16:17.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.441 "is_configured": true, 00:16:17.441 "data_offset": 2048, 00:16:17.441 "data_size": 63488 00:16:17.441 }, 00:16:17.441 { 00:16:17.441 "name": null, 00:16:17.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.441 "is_configured": false, 00:16:17.441 "data_offset": 0, 00:16:17.441 "data_size": 63488 00:16:17.441 }, 00:16:17.441 { 00:16:17.441 "name": null, 00:16:17.441 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.441 "is_configured": false, 00:16:17.441 "data_offset": 2048, 00:16:17.441 "data_size": 63488 00:16:17.441 }, 00:16:17.441 { 00:16:17.441 "name": null, 00:16:17.441 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.441 "is_configured": false, 00:16:17.441 "data_offset": 2048, 00:16:17.441 "data_size": 63488 00:16:17.441 } 00:16:17.441 ] 00:16:17.441 }' 00:16:17.441 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.441 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.702 [2024-09-29 21:47:36.569902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:17.702 [2024-09-29 21:47:36.569944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.702 [2024-09-29 21:47:36.569958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:17.702 [2024-09-29 21:47:36.569965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.702 [2024-09-29 21:47:36.570298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.702 [2024-09-29 21:47:36.570316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:17.702 [2024-09-29 21:47:36.570368] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:17.702 [2024-09-29 21:47:36.570390] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:17.702 pt2 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.702 [2024-09-29 21:47:36.581886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:17.702 [2024-09-29 21:47:36.581927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.702 [2024-09-29 21:47:36.581941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:17.702 [2024-09-29 21:47:36.581949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.702 [2024-09-29 21:47:36.582254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.702 [2024-09-29 21:47:36.582276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:17.702 [2024-09-29 21:47:36.582326] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:17.702 [2024-09-29 21:47:36.582345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:17.702 pt3 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.702 [2024-09-29 21:47:36.593848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:17.702 [2024-09-29 21:47:36.593889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.702 [2024-09-29 21:47:36.593906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:17.702 [2024-09-29 21:47:36.593913] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.702 [2024-09-29 21:47:36.594222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.702 [2024-09-29 21:47:36.594238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:17.702 [2024-09-29 21:47:36.594286] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:17.702 [2024-09-29 21:47:36.594306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:17.702 [2024-09-29 21:47:36.594417] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:17.702 [2024-09-29 21:47:36.594424] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:17.702 [2024-09-29 21:47:36.594627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:17.702 [2024-09-29 21:47:36.601343] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:17.702 [2024-09-29 21:47:36.601367] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:17.702 [2024-09-29 21:47:36.601522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.702 pt4 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.702 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.702 "name": "raid_bdev1", 00:16:17.702 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:17.703 "strip_size_kb": 64, 00:16:17.703 "state": "online", 00:16:17.703 "raid_level": "raid5f", 00:16:17.703 "superblock": true, 00:16:17.703 "num_base_bdevs": 4, 00:16:17.703 "num_base_bdevs_discovered": 4, 00:16:17.703 "num_base_bdevs_operational": 4, 00:16:17.703 "base_bdevs_list": [ 00:16:17.703 { 00:16:17.703 "name": "pt1", 00:16:17.703 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.703 "is_configured": true, 00:16:17.703 
"data_offset": 2048, 00:16:17.703 "data_size": 63488 00:16:17.703 }, 00:16:17.703 { 00:16:17.703 "name": "pt2", 00:16:17.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.703 "is_configured": true, 00:16:17.703 "data_offset": 2048, 00:16:17.703 "data_size": 63488 00:16:17.703 }, 00:16:17.703 { 00:16:17.703 "name": "pt3", 00:16:17.703 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.703 "is_configured": true, 00:16:17.703 "data_offset": 2048, 00:16:17.703 "data_size": 63488 00:16:17.703 }, 00:16:17.703 { 00:16:17.703 "name": "pt4", 00:16:17.703 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.703 "is_configured": true, 00:16:17.703 "data_offset": 2048, 00:16:17.703 "data_size": 63488 00:16:17.703 } 00:16:17.703 ] 00:16:17.703 }' 00:16:17.703 21:47:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.703 21:47:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.273 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:18.273 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:18.273 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:18.273 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:18.273 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:18.273 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:18.273 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:18.273 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:18.273 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.273 21:47:37 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.273 [2024-09-29 21:47:37.024661] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.273 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.273 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:18.273 "name": "raid_bdev1", 00:16:18.273 "aliases": [ 00:16:18.273 "94519c6b-1012-49d2-9a55-667110f6f411" 00:16:18.273 ], 00:16:18.273 "product_name": "Raid Volume", 00:16:18.273 "block_size": 512, 00:16:18.273 "num_blocks": 190464, 00:16:18.273 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:18.273 "assigned_rate_limits": { 00:16:18.273 "rw_ios_per_sec": 0, 00:16:18.273 "rw_mbytes_per_sec": 0, 00:16:18.273 "r_mbytes_per_sec": 0, 00:16:18.273 "w_mbytes_per_sec": 0 00:16:18.273 }, 00:16:18.274 "claimed": false, 00:16:18.274 "zoned": false, 00:16:18.274 "supported_io_types": { 00:16:18.274 "read": true, 00:16:18.274 "write": true, 00:16:18.274 "unmap": false, 00:16:18.274 "flush": false, 00:16:18.274 "reset": true, 00:16:18.274 "nvme_admin": false, 00:16:18.274 "nvme_io": false, 00:16:18.274 "nvme_io_md": false, 00:16:18.274 "write_zeroes": true, 00:16:18.274 "zcopy": false, 00:16:18.274 "get_zone_info": false, 00:16:18.274 "zone_management": false, 00:16:18.274 "zone_append": false, 00:16:18.274 "compare": false, 00:16:18.274 "compare_and_write": false, 00:16:18.274 "abort": false, 00:16:18.274 "seek_hole": false, 00:16:18.274 "seek_data": false, 00:16:18.274 "copy": false, 00:16:18.274 "nvme_iov_md": false 00:16:18.274 }, 00:16:18.274 "driver_specific": { 00:16:18.274 "raid": { 00:16:18.274 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:18.274 "strip_size_kb": 64, 00:16:18.274 "state": "online", 00:16:18.274 "raid_level": "raid5f", 00:16:18.274 "superblock": true, 00:16:18.274 "num_base_bdevs": 4, 00:16:18.274 "num_base_bdevs_discovered": 4, 
00:16:18.274 "num_base_bdevs_operational": 4, 00:16:18.274 "base_bdevs_list": [ 00:16:18.274 { 00:16:18.274 "name": "pt1", 00:16:18.274 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:18.274 "is_configured": true, 00:16:18.274 "data_offset": 2048, 00:16:18.274 "data_size": 63488 00:16:18.274 }, 00:16:18.274 { 00:16:18.274 "name": "pt2", 00:16:18.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.274 "is_configured": true, 00:16:18.274 "data_offset": 2048, 00:16:18.274 "data_size": 63488 00:16:18.274 }, 00:16:18.274 { 00:16:18.274 "name": "pt3", 00:16:18.274 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:18.274 "is_configured": true, 00:16:18.274 "data_offset": 2048, 00:16:18.274 "data_size": 63488 00:16:18.274 }, 00:16:18.274 { 00:16:18.274 "name": "pt4", 00:16:18.274 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:18.274 "is_configured": true, 00:16:18.274 "data_offset": 2048, 00:16:18.274 "data_size": 63488 00:16:18.274 } 00:16:18.274 ] 00:16:18.274 } 00:16:18.274 } 00:16:18.274 }' 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:18.274 pt2 00:16:18.274 pt3 00:16:18.274 pt4' 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.274 21:47:37 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:18.274 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.274 
21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.534 [2024-09-29 21:47:37.328287] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 94519c6b-1012-49d2-9a55-667110f6f411 '!=' 94519c6b-1012-49d2-9a55-667110f6f411 ']' 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.534 [2024-09-29 21:47:37.372167] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:18.534 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.535 "name": "raid_bdev1", 00:16:18.535 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:18.535 "strip_size_kb": 64, 00:16:18.535 "state": "online", 00:16:18.535 "raid_level": "raid5f", 00:16:18.535 "superblock": true, 00:16:18.535 "num_base_bdevs": 4, 00:16:18.535 "num_base_bdevs_discovered": 3, 00:16:18.535 "num_base_bdevs_operational": 3, 00:16:18.535 "base_bdevs_list": [ 00:16:18.535 { 00:16:18.535 "name": null, 00:16:18.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.535 "is_configured": false, 00:16:18.535 "data_offset": 0, 00:16:18.535 "data_size": 63488 00:16:18.535 }, 00:16:18.535 { 00:16:18.535 "name": "pt2", 00:16:18.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.535 "is_configured": true, 00:16:18.535 "data_offset": 2048, 00:16:18.535 "data_size": 63488 00:16:18.535 }, 00:16:18.535 { 00:16:18.535 "name": "pt3", 00:16:18.535 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:18.535 "is_configured": true, 00:16:18.535 "data_offset": 2048, 00:16:18.535 "data_size": 63488 00:16:18.535 }, 00:16:18.535 { 00:16:18.535 "name": "pt4", 00:16:18.535 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:18.535 "is_configured": true, 00:16:18.535 
"data_offset": 2048, 00:16:18.535 "data_size": 63488 00:16:18.535 } 00:16:18.535 ] 00:16:18.535 }' 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.535 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.108 [2024-09-29 21:47:37.791389] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.108 [2024-09-29 21:47:37.791417] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.108 [2024-09-29 21:47:37.791470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.108 [2024-09-29 21:47:37.791537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.108 [2024-09-29 21:47:37.791546] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.108 [2024-09-29 21:47:37.871248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:19.108 [2024-09-29 21:47:37.871291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.108 [2024-09-29 21:47:37.871308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:19.108 [2024-09-29 21:47:37.871316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.108 [2024-09-29 21:47:37.873318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.108 [2024-09-29 21:47:37.873350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:19.108 [2024-09-29 21:47:37.873416] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:19.108 [2024-09-29 21:47:37.873460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:19.108 pt2 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.108 "name": "raid_bdev1", 00:16:19.108 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:19.108 "strip_size_kb": 64, 00:16:19.108 "state": "configuring", 00:16:19.108 "raid_level": "raid5f", 00:16:19.108 "superblock": true, 00:16:19.108 
"num_base_bdevs": 4, 00:16:19.108 "num_base_bdevs_discovered": 1, 00:16:19.108 "num_base_bdevs_operational": 3, 00:16:19.108 "base_bdevs_list": [ 00:16:19.108 { 00:16:19.108 "name": null, 00:16:19.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.108 "is_configured": false, 00:16:19.108 "data_offset": 2048, 00:16:19.108 "data_size": 63488 00:16:19.108 }, 00:16:19.108 { 00:16:19.108 "name": "pt2", 00:16:19.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.108 "is_configured": true, 00:16:19.108 "data_offset": 2048, 00:16:19.108 "data_size": 63488 00:16:19.108 }, 00:16:19.108 { 00:16:19.108 "name": null, 00:16:19.108 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.108 "is_configured": false, 00:16:19.108 "data_offset": 2048, 00:16:19.108 "data_size": 63488 00:16:19.108 }, 00:16:19.108 { 00:16:19.108 "name": null, 00:16:19.108 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.108 "is_configured": false, 00:16:19.108 "data_offset": 2048, 00:16:19.108 "data_size": 63488 00:16:19.108 } 00:16:19.108 ] 00:16:19.108 }' 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.108 21:47:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.369 [2024-09-29 21:47:38.318485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:19.369 [2024-09-29 
21:47:38.318525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.369 [2024-09-29 21:47:38.318541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:19.369 [2024-09-29 21:47:38.318549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.369 [2024-09-29 21:47:38.318888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.369 [2024-09-29 21:47:38.318905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:19.369 [2024-09-29 21:47:38.318964] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:19.369 [2024-09-29 21:47:38.318990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:19.369 pt3 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.369 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.628 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.628 "name": "raid_bdev1", 00:16:19.628 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:19.628 "strip_size_kb": 64, 00:16:19.628 "state": "configuring", 00:16:19.628 "raid_level": "raid5f", 00:16:19.628 "superblock": true, 00:16:19.628 "num_base_bdevs": 4, 00:16:19.628 "num_base_bdevs_discovered": 2, 00:16:19.628 "num_base_bdevs_operational": 3, 00:16:19.628 "base_bdevs_list": [ 00:16:19.628 { 00:16:19.628 "name": null, 00:16:19.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.628 "is_configured": false, 00:16:19.628 "data_offset": 2048, 00:16:19.628 "data_size": 63488 00:16:19.628 }, 00:16:19.628 { 00:16:19.628 "name": "pt2", 00:16:19.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.628 "is_configured": true, 00:16:19.628 "data_offset": 2048, 00:16:19.628 "data_size": 63488 00:16:19.628 }, 00:16:19.628 { 00:16:19.628 "name": "pt3", 00:16:19.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.628 "is_configured": true, 00:16:19.628 "data_offset": 2048, 00:16:19.628 "data_size": 63488 00:16:19.628 }, 00:16:19.628 { 00:16:19.628 "name": null, 00:16:19.628 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.628 "is_configured": false, 00:16:19.628 "data_offset": 2048, 
00:16:19.628 "data_size": 63488 00:16:19.628 } 00:16:19.628 ] 00:16:19.628 }' 00:16:19.628 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.628 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.889 [2024-09-29 21:47:38.713879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:19.889 [2024-09-29 21:47:38.713919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.889 [2024-09-29 21:47:38.713934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:19.889 [2024-09-29 21:47:38.713942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.889 [2024-09-29 21:47:38.714307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.889 [2024-09-29 21:47:38.714324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:19.889 [2024-09-29 21:47:38.714378] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:19.889 [2024-09-29 21:47:38.714395] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:19.889 [2024-09-29 21:47:38.714503] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:19.889 [2024-09-29 21:47:38.714511] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:19.889 [2024-09-29 21:47:38.714727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:19.889 [2024-09-29 21:47:38.721615] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:19.889 [2024-09-29 21:47:38.721641] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:19.889 [2024-09-29 21:47:38.721890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.889 pt4 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.889 
21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.889 "name": "raid_bdev1", 00:16:19.889 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:19.889 "strip_size_kb": 64, 00:16:19.889 "state": "online", 00:16:19.889 "raid_level": "raid5f", 00:16:19.889 "superblock": true, 00:16:19.889 "num_base_bdevs": 4, 00:16:19.889 "num_base_bdevs_discovered": 3, 00:16:19.889 "num_base_bdevs_operational": 3, 00:16:19.889 "base_bdevs_list": [ 00:16:19.889 { 00:16:19.889 "name": null, 00:16:19.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.889 "is_configured": false, 00:16:19.889 "data_offset": 2048, 00:16:19.889 "data_size": 63488 00:16:19.889 }, 00:16:19.889 { 00:16:19.889 "name": "pt2", 00:16:19.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:19.889 "is_configured": true, 00:16:19.889 "data_offset": 2048, 00:16:19.889 "data_size": 63488 00:16:19.889 }, 00:16:19.889 { 00:16:19.889 "name": "pt3", 00:16:19.889 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:19.889 "is_configured": true, 00:16:19.889 "data_offset": 2048, 00:16:19.889 "data_size": 63488 00:16:19.889 }, 00:16:19.889 { 00:16:19.889 "name": "pt4", 00:16:19.889 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.889 "is_configured": true, 00:16:19.889 "data_offset": 2048, 00:16:19.889 "data_size": 63488 00:16:19.889 } 00:16:19.889 ] 00:16:19.889 }' 00:16:19.889 21:47:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.889 21:47:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.459 [2024-09-29 21:47:39.205127] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.459 [2024-09-29 21:47:39.205154] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.459 [2024-09-29 21:47:39.205214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.459 [2024-09-29 21:47:39.205274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.459 [2024-09-29 21:47:39.205287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.459 [2024-09-29 21:47:39.273017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:20.459 [2024-09-29 21:47:39.273084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.459 [2024-09-29 21:47:39.273099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:20.459 [2024-09-29 21:47:39.273109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.459 [2024-09-29 21:47:39.275701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.459 [2024-09-29 21:47:39.275736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:20.459 [2024-09-29 21:47:39.275799] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:20.459 [2024-09-29 21:47:39.275854] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:20.459 
[2024-09-29 21:47:39.275976] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:20.459 [2024-09-29 21:47:39.275990] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.459 [2024-09-29 21:47:39.276002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:20.459 [2024-09-29 21:47:39.276065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:20.459 [2024-09-29 21:47:39.276171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:20.459 pt1 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.459 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.459 "name": "raid_bdev1", 00:16:20.459 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:20.459 "strip_size_kb": 64, 00:16:20.459 "state": "configuring", 00:16:20.459 "raid_level": "raid5f", 00:16:20.459 "superblock": true, 00:16:20.459 "num_base_bdevs": 4, 00:16:20.459 "num_base_bdevs_discovered": 2, 00:16:20.459 "num_base_bdevs_operational": 3, 00:16:20.459 "base_bdevs_list": [ 00:16:20.459 { 00:16:20.459 "name": null, 00:16:20.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.459 "is_configured": false, 00:16:20.459 "data_offset": 2048, 00:16:20.459 "data_size": 63488 00:16:20.459 }, 00:16:20.459 { 00:16:20.459 "name": "pt2", 00:16:20.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.459 "is_configured": true, 00:16:20.459 "data_offset": 2048, 00:16:20.460 "data_size": 63488 00:16:20.460 }, 00:16:20.460 { 00:16:20.460 "name": "pt3", 00:16:20.460 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:20.460 "is_configured": true, 00:16:20.460 "data_offset": 2048, 00:16:20.460 "data_size": 63488 00:16:20.460 }, 00:16:20.460 { 00:16:20.460 "name": null, 00:16:20.460 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:20.460 "is_configured": false, 00:16:20.460 "data_offset": 2048, 00:16:20.460 "data_size": 63488 00:16:20.460 } 00:16:20.460 ] 
00:16:20.460 }' 00:16:20.460 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.460 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.719 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:20.719 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:20.719 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.719 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.719 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.978 [2024-09-29 21:47:39.712263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:20.978 [2024-09-29 21:47:39.712304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.978 [2024-09-29 21:47:39.712323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:20.978 [2024-09-29 21:47:39.712331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.978 [2024-09-29 21:47:39.712655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.978 [2024-09-29 21:47:39.712672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:20.978 [2024-09-29 21:47:39.712727] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:20.978 [2024-09-29 21:47:39.712743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:20.978 [2024-09-29 21:47:39.712865] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:20.978 [2024-09-29 21:47:39.712872] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:20.978 [2024-09-29 21:47:39.713104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:20.978 [2024-09-29 21:47:39.719524] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:20.978 [2024-09-29 21:47:39.719551] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:20.978 [2024-09-29 21:47:39.719769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.978 pt4 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.978 21:47:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.978 "name": "raid_bdev1", 00:16:20.978 "uuid": "94519c6b-1012-49d2-9a55-667110f6f411", 00:16:20.978 "strip_size_kb": 64, 00:16:20.978 "state": "online", 00:16:20.978 "raid_level": "raid5f", 00:16:20.978 "superblock": true, 00:16:20.978 "num_base_bdevs": 4, 00:16:20.978 "num_base_bdevs_discovered": 3, 00:16:20.978 "num_base_bdevs_operational": 3, 00:16:20.978 "base_bdevs_list": [ 00:16:20.978 { 00:16:20.978 "name": null, 00:16:20.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.978 "is_configured": false, 00:16:20.978 "data_offset": 2048, 00:16:20.978 "data_size": 63488 00:16:20.978 }, 00:16:20.978 { 00:16:20.978 "name": "pt2", 00:16:20.978 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:20.978 "is_configured": true, 00:16:20.978 "data_offset": 2048, 00:16:20.978 "data_size": 63488 00:16:20.978 }, 00:16:20.978 { 00:16:20.978 "name": "pt3", 00:16:20.978 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:20.978 "is_configured": true, 00:16:20.978 "data_offset": 2048, 00:16:20.978 "data_size": 63488 
00:16:20.978 }, 00:16:20.978 { 00:16:20.978 "name": "pt4", 00:16:20.978 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:20.978 "is_configured": true, 00:16:20.978 "data_offset": 2048, 00:16:20.978 "data_size": 63488 00:16:20.978 } 00:16:20.978 ] 00:16:20.978 }' 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.978 21:47:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.238 [2024-09-29 21:47:40.162968] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 94519c6b-1012-49d2-9a55-667110f6f411 '!=' 94519c6b-1012-49d2-9a55-667110f6f411 ']' 00:16:21.238 21:47:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84162 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84162 ']' 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84162 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.238 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84162 00:16:21.498 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.498 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.498 killing process with pid 84162 00:16:21.498 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84162' 00:16:21.498 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 84162 00:16:21.498 [2024-09-29 21:47:40.245051] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.498 [2024-09-29 21:47:40.245127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.498 [2024-09-29 21:47:40.245192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.498 [2024-09-29 21:47:40.245212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:21.498 21:47:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 84162 00:16:21.758 [2024-09-29 21:47:40.616086] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.146 21:47:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:23.146 
00:16:23.146 real 0m8.340s 00:16:23.146 user 0m12.942s 00:16:23.146 sys 0m1.622s 00:16:23.146 21:47:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:23.147 21:47:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.147 ************************************ 00:16:23.147 END TEST raid5f_superblock_test 00:16:23.147 ************************************ 00:16:23.147 21:47:41 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:23.147 21:47:41 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:23.147 21:47:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:23.147 21:47:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:23.147 21:47:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.147 ************************************ 00:16:23.147 START TEST raid5f_rebuild_test 00:16:23.147 ************************************ 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:23.147 21:47:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84642 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84642 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 84642 ']' 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:23.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:23.147 21:47:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.147 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:23.147 Zero copy mechanism will not be used. 00:16:23.147 [2024-09-29 21:47:41.995229] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:23.147 [2024-09-29 21:47:41.995329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84642 ] 00:16:23.417 [2024-09-29 21:47:42.156244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.417 [2024-09-29 21:47:42.351222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.757 [2024-09-29 21:47:42.535351] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.757 [2024-09-29 21:47:42.535385] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.021 BaseBdev1_malloc 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.021 [2024-09-29 21:47:42.854684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:24.021 [2024-09-29 21:47:42.854753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.021 [2024-09-29 21:47:42.854775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:24.021 [2024-09-29 21:47:42.854789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.021 [2024-09-29 21:47:42.856710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.021 [2024-09-29 21:47:42.856745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:24.021 BaseBdev1 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.021 BaseBdev2_malloc 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.021 [2024-09-29 21:47:42.940562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:24.021 [2024-09-29 21:47:42.940618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.021 [2024-09-29 21:47:42.940636] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:24.021 [2024-09-29 21:47:42.940649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.021 [2024-09-29 21:47:42.942617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.021 [2024-09-29 21:47:42.942651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:24.021 BaseBdev2 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.021 BaseBdev3_malloc 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.021 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.022 [2024-09-29 21:47:42.994054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:24.022 [2024-09-29 21:47:42.994101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.022 [2024-09-29 21:47:42.994119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:24.022 [2024-09-29 21:47:42.994129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.022 
[2024-09-29 21:47:42.995950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.022 [2024-09-29 21:47:42.995985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:24.022 BaseBdev3 00:16:24.022 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.022 21:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:24.022 21:47:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:24.022 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.022 21:47:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.281 BaseBdev4_malloc 00:16:24.281 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.281 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.282 [2024-09-29 21:47:43.049342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:24.282 [2024-09-29 21:47:43.049392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.282 [2024-09-29 21:47:43.049407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:24.282 [2024-09-29 21:47:43.049417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.282 [2024-09-29 21:47:43.051275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.282 [2024-09-29 21:47:43.051309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:24.282 BaseBdev4 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.282 spare_malloc 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.282 spare_delay 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.282 [2024-09-29 21:47:43.110428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:24.282 [2024-09-29 21:47:43.110480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.282 [2024-09-29 21:47:43.110497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:24.282 [2024-09-29 21:47:43.110507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.282 [2024-09-29 21:47:43.112416] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.282 [2024-09-29 21:47:43.112449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:24.282 spare 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.282 [2024-09-29 21:47:43.122465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.282 [2024-09-29 21:47:43.124091] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.282 [2024-09-29 21:47:43.124159] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:24.282 [2024-09-29 21:47:43.124206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:24.282 [2024-09-29 21:47:43.124285] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:24.282 [2024-09-29 21:47:43.124295] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:24.282 [2024-09-29 21:47:43.124521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:24.282 [2024-09-29 21:47:43.131341] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:24.282 [2024-09-29 21:47:43.131362] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:24.282 [2024-09-29 21:47:43.131522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.282 21:47:43 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.282 "name": "raid_bdev1", 00:16:24.282 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:24.282 "strip_size_kb": 64, 00:16:24.282 "state": "online", 00:16:24.282 
"raid_level": "raid5f", 00:16:24.282 "superblock": false, 00:16:24.282 "num_base_bdevs": 4, 00:16:24.282 "num_base_bdevs_discovered": 4, 00:16:24.282 "num_base_bdevs_operational": 4, 00:16:24.282 "base_bdevs_list": [ 00:16:24.282 { 00:16:24.282 "name": "BaseBdev1", 00:16:24.282 "uuid": "0d4c6b56-545f-530c-8113-a6ea5a2418bc", 00:16:24.282 "is_configured": true, 00:16:24.282 "data_offset": 0, 00:16:24.282 "data_size": 65536 00:16:24.282 }, 00:16:24.282 { 00:16:24.282 "name": "BaseBdev2", 00:16:24.282 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:24.282 "is_configured": true, 00:16:24.282 "data_offset": 0, 00:16:24.282 "data_size": 65536 00:16:24.282 }, 00:16:24.282 { 00:16:24.282 "name": "BaseBdev3", 00:16:24.282 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:24.282 "is_configured": true, 00:16:24.282 "data_offset": 0, 00:16:24.282 "data_size": 65536 00:16:24.282 }, 00:16:24.282 { 00:16:24.282 "name": "BaseBdev4", 00:16:24.282 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:24.282 "is_configured": true, 00:16:24.282 "data_offset": 0, 00:16:24.282 "data_size": 65536 00:16:24.282 } 00:16:24.282 ] 00:16:24.282 }' 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.282 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:24.851 [2024-09-29 21:47:43.582381] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:24.851 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:25.111 [2024-09-29 21:47:43.857797] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:25.111 /dev/nbd0 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:25.111 1+0 records in 00:16:25.111 1+0 records out 00:16:25.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450566 s, 9.1 MB/s 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:25.111 21:47:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:25.680 512+0 records in 00:16:25.680 512+0 records out 00:16:25.680 100663296 bytes (101 MB, 96 MiB) copied, 0.562592 s, 179 MB/s 00:16:25.680 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:25.680 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:25.680 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:25.680 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:25.680 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:25.680 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:25.680 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:25.940 
[2024-09-29 21:47:44.731334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.940 [2024-09-29 21:47:44.742054] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.940 "name": "raid_bdev1", 00:16:25.940 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:25.940 "strip_size_kb": 64, 00:16:25.940 "state": "online", 00:16:25.940 "raid_level": "raid5f", 00:16:25.940 "superblock": false, 00:16:25.940 "num_base_bdevs": 4, 00:16:25.940 "num_base_bdevs_discovered": 3, 00:16:25.940 "num_base_bdevs_operational": 3, 00:16:25.940 "base_bdevs_list": [ 00:16:25.940 { 00:16:25.940 "name": null, 00:16:25.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.940 "is_configured": false, 00:16:25.940 "data_offset": 0, 00:16:25.940 "data_size": 65536 00:16:25.940 }, 00:16:25.940 { 00:16:25.940 "name": "BaseBdev2", 00:16:25.940 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:25.940 "is_configured": true, 00:16:25.940 "data_offset": 0, 00:16:25.940 "data_size": 65536 00:16:25.940 }, 00:16:25.940 { 00:16:25.940 "name": "BaseBdev3", 00:16:25.940 "uuid": 
"1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:25.940 "is_configured": true, 00:16:25.940 "data_offset": 0, 00:16:25.940 "data_size": 65536 00:16:25.940 }, 00:16:25.940 { 00:16:25.940 "name": "BaseBdev4", 00:16:25.940 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:25.940 "is_configured": true, 00:16:25.940 "data_offset": 0, 00:16:25.940 "data_size": 65536 00:16:25.940 } 00:16:25.940 ] 00:16:25.940 }' 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.940 21:47:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.509 21:47:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:26.509 21:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.509 21:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.509 [2024-09-29 21:47:45.217189] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.509 [2024-09-29 21:47:45.232387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:26.509 21:47:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.509 21:47:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:26.509 [2024-09-29 21:47:45.241859] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.447 21:47:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.447 "name": "raid_bdev1", 00:16:27.447 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:27.447 "strip_size_kb": 64, 00:16:27.447 "state": "online", 00:16:27.447 "raid_level": "raid5f", 00:16:27.447 "superblock": false, 00:16:27.447 "num_base_bdevs": 4, 00:16:27.447 "num_base_bdevs_discovered": 4, 00:16:27.447 "num_base_bdevs_operational": 4, 00:16:27.447 "process": { 00:16:27.447 "type": "rebuild", 00:16:27.447 "target": "spare", 00:16:27.447 "progress": { 00:16:27.447 "blocks": 19200, 00:16:27.447 "percent": 9 00:16:27.447 } 00:16:27.447 }, 00:16:27.447 "base_bdevs_list": [ 00:16:27.447 { 00:16:27.447 "name": "spare", 00:16:27.447 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 00:16:27.447 "is_configured": true, 00:16:27.447 "data_offset": 0, 00:16:27.447 "data_size": 65536 00:16:27.447 }, 00:16:27.447 { 00:16:27.447 "name": "BaseBdev2", 00:16:27.447 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:27.447 "is_configured": true, 00:16:27.447 "data_offset": 0, 00:16:27.447 "data_size": 65536 00:16:27.447 }, 00:16:27.447 { 00:16:27.447 "name": "BaseBdev3", 00:16:27.447 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:27.447 "is_configured": true, 00:16:27.447 "data_offset": 0, 00:16:27.447 "data_size": 65536 00:16:27.447 }, 
00:16:27.447 { 00:16:27.447 "name": "BaseBdev4", 00:16:27.447 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:27.447 "is_configured": true, 00:16:27.447 "data_offset": 0, 00:16:27.447 "data_size": 65536 00:16:27.447 } 00:16:27.447 ] 00:16:27.447 }' 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.447 21:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.447 [2024-09-29 21:47:46.397033] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.707 [2024-09-29 21:47:46.449108] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:27.707 [2024-09-29 21:47:46.449170] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.707 [2024-09-29 21:47:46.449186] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.707 [2024-09-29 21:47:46.449198] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.707 "name": "raid_bdev1", 00:16:27.707 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:27.707 "strip_size_kb": 64, 00:16:27.707 "state": "online", 00:16:27.707 "raid_level": "raid5f", 00:16:27.707 "superblock": false, 00:16:27.707 "num_base_bdevs": 4, 00:16:27.707 "num_base_bdevs_discovered": 3, 00:16:27.707 "num_base_bdevs_operational": 3, 00:16:27.707 "base_bdevs_list": [ 00:16:27.707 { 00:16:27.707 "name": null, 00:16:27.707 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:27.707 "is_configured": false, 00:16:27.707 "data_offset": 0, 00:16:27.707 "data_size": 65536 00:16:27.707 }, 00:16:27.707 { 00:16:27.707 "name": "BaseBdev2", 00:16:27.707 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:27.707 "is_configured": true, 00:16:27.707 "data_offset": 0, 00:16:27.707 "data_size": 65536 00:16:27.707 }, 00:16:27.707 { 00:16:27.707 "name": "BaseBdev3", 00:16:27.707 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:27.707 "is_configured": true, 00:16:27.707 "data_offset": 0, 00:16:27.707 "data_size": 65536 00:16:27.707 }, 00:16:27.707 { 00:16:27.707 "name": "BaseBdev4", 00:16:27.707 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:27.707 "is_configured": true, 00:16:27.707 "data_offset": 0, 00:16:27.707 "data_size": 65536 00:16:27.707 } 00:16:27.707 ] 00:16:27.707 }' 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.707 21:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.967 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.967 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.967 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.967 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.967 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.967 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.967 21:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.967 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.967 21:47:46 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.967 21:47:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.967 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.967 "name": "raid_bdev1", 00:16:27.967 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:27.967 "strip_size_kb": 64, 00:16:27.967 "state": "online", 00:16:27.967 "raid_level": "raid5f", 00:16:27.967 "superblock": false, 00:16:27.967 "num_base_bdevs": 4, 00:16:27.967 "num_base_bdevs_discovered": 3, 00:16:27.967 "num_base_bdevs_operational": 3, 00:16:27.967 "base_bdevs_list": [ 00:16:27.967 { 00:16:27.967 "name": null, 00:16:27.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.967 "is_configured": false, 00:16:27.967 "data_offset": 0, 00:16:27.967 "data_size": 65536 00:16:27.967 }, 00:16:27.967 { 00:16:27.967 "name": "BaseBdev2", 00:16:27.967 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:27.967 "is_configured": true, 00:16:27.967 "data_offset": 0, 00:16:27.967 "data_size": 65536 00:16:27.967 }, 00:16:27.967 { 00:16:27.967 "name": "BaseBdev3", 00:16:27.967 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:27.967 "is_configured": true, 00:16:27.967 "data_offset": 0, 00:16:27.967 "data_size": 65536 00:16:27.967 }, 00:16:27.967 { 00:16:27.967 "name": "BaseBdev4", 00:16:27.967 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:27.967 "is_configured": true, 00:16:27.967 "data_offset": 0, 00:16:27.967 "data_size": 65536 00:16:27.967 } 00:16:27.967 ] 00:16:27.967 }' 00:16:28.227 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.227 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:28.227 21:47:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.227 21:47:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:28.227 21:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:28.227 21:47:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.227 21:47:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.227 [2024-09-29 21:47:47.031203] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.227 [2024-09-29 21:47:47.045642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:28.227 21:47:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.227 21:47:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:28.227 [2024-09-29 21:47:47.054858] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.166 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.166 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.166 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.166 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.166 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.166 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.166 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.166 21:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.166 21:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.166 21:47:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.166 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.166 "name": "raid_bdev1", 00:16:29.166 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:29.166 "strip_size_kb": 64, 00:16:29.166 "state": "online", 00:16:29.166 "raid_level": "raid5f", 00:16:29.166 "superblock": false, 00:16:29.166 "num_base_bdevs": 4, 00:16:29.166 "num_base_bdevs_discovered": 4, 00:16:29.166 "num_base_bdevs_operational": 4, 00:16:29.166 "process": { 00:16:29.166 "type": "rebuild", 00:16:29.166 "target": "spare", 00:16:29.166 "progress": { 00:16:29.166 "blocks": 19200, 00:16:29.166 "percent": 9 00:16:29.166 } 00:16:29.166 }, 00:16:29.166 "base_bdevs_list": [ 00:16:29.166 { 00:16:29.166 "name": "spare", 00:16:29.166 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 00:16:29.166 "is_configured": true, 00:16:29.166 "data_offset": 0, 00:16:29.166 "data_size": 65536 00:16:29.166 }, 00:16:29.166 { 00:16:29.166 "name": "BaseBdev2", 00:16:29.166 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:29.166 "is_configured": true, 00:16:29.166 "data_offset": 0, 00:16:29.166 "data_size": 65536 00:16:29.166 }, 00:16:29.166 { 00:16:29.166 "name": "BaseBdev3", 00:16:29.166 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:29.166 "is_configured": true, 00:16:29.166 "data_offset": 0, 00:16:29.166 "data_size": 65536 00:16:29.166 }, 00:16:29.166 { 00:16:29.166 "name": "BaseBdev4", 00:16:29.166 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:29.166 "is_configured": true, 00:16:29.166 "data_offset": 0, 00:16:29.166 "data_size": 65536 00:16:29.166 } 00:16:29.166 ] 00:16:29.166 }' 00:16:29.166 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=624 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.427 "name": "raid_bdev1", 00:16:29.427 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 
00:16:29.427 "strip_size_kb": 64, 00:16:29.427 "state": "online", 00:16:29.427 "raid_level": "raid5f", 00:16:29.427 "superblock": false, 00:16:29.427 "num_base_bdevs": 4, 00:16:29.427 "num_base_bdevs_discovered": 4, 00:16:29.427 "num_base_bdevs_operational": 4, 00:16:29.427 "process": { 00:16:29.427 "type": "rebuild", 00:16:29.427 "target": "spare", 00:16:29.427 "progress": { 00:16:29.427 "blocks": 21120, 00:16:29.427 "percent": 10 00:16:29.427 } 00:16:29.427 }, 00:16:29.427 "base_bdevs_list": [ 00:16:29.427 { 00:16:29.427 "name": "spare", 00:16:29.427 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 00:16:29.427 "is_configured": true, 00:16:29.427 "data_offset": 0, 00:16:29.427 "data_size": 65536 00:16:29.427 }, 00:16:29.427 { 00:16:29.427 "name": "BaseBdev2", 00:16:29.427 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:29.427 "is_configured": true, 00:16:29.427 "data_offset": 0, 00:16:29.427 "data_size": 65536 00:16:29.427 }, 00:16:29.427 { 00:16:29.427 "name": "BaseBdev3", 00:16:29.427 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:29.427 "is_configured": true, 00:16:29.427 "data_offset": 0, 00:16:29.427 "data_size": 65536 00:16:29.427 }, 00:16:29.427 { 00:16:29.427 "name": "BaseBdev4", 00:16:29.427 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:29.427 "is_configured": true, 00:16:29.427 "data_offset": 0, 00:16:29.427 "data_size": 65536 00:16:29.427 } 00:16:29.427 ] 00:16:29.427 }' 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.427 21:47:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.367 21:47:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.367 21:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.367 21:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.367 21:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.367 21:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.367 21:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.628 21:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.628 21:47:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.628 21:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.628 21:47:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.628 21:47:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.628 21:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.628 "name": "raid_bdev1", 00:16:30.628 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:30.628 "strip_size_kb": 64, 00:16:30.628 "state": "online", 00:16:30.628 "raid_level": "raid5f", 00:16:30.628 "superblock": false, 00:16:30.628 "num_base_bdevs": 4, 00:16:30.628 "num_base_bdevs_discovered": 4, 00:16:30.628 "num_base_bdevs_operational": 4, 00:16:30.628 "process": { 00:16:30.628 "type": "rebuild", 00:16:30.628 "target": "spare", 00:16:30.628 "progress": { 00:16:30.628 "blocks": 42240, 00:16:30.628 "percent": 21 00:16:30.628 } 00:16:30.628 }, 00:16:30.628 "base_bdevs_list": [ 00:16:30.628 { 00:16:30.628 "name": "spare", 00:16:30.628 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 
00:16:30.628 "is_configured": true, 00:16:30.628 "data_offset": 0, 00:16:30.628 "data_size": 65536 00:16:30.628 }, 00:16:30.628 { 00:16:30.628 "name": "BaseBdev2", 00:16:30.628 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:30.628 "is_configured": true, 00:16:30.628 "data_offset": 0, 00:16:30.628 "data_size": 65536 00:16:30.628 }, 00:16:30.628 { 00:16:30.628 "name": "BaseBdev3", 00:16:30.628 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:30.628 "is_configured": true, 00:16:30.628 "data_offset": 0, 00:16:30.628 "data_size": 65536 00:16:30.628 }, 00:16:30.628 { 00:16:30.628 "name": "BaseBdev4", 00:16:30.628 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:30.628 "is_configured": true, 00:16:30.628 "data_offset": 0, 00:16:30.628 "data_size": 65536 00:16:30.628 } 00:16:30.628 ] 00:16:30.628 }' 00:16:30.628 21:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.628 21:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.628 21:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.628 21:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.628 21:47:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.570 21:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.570 21:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.570 21:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.570 21:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.570 21:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.570 21:47:50 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.570 21:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.570 21:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.570 21:47:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.570 21:47:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.570 21:47:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.570 21:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.570 "name": "raid_bdev1", 00:16:31.570 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:31.570 "strip_size_kb": 64, 00:16:31.570 "state": "online", 00:16:31.570 "raid_level": "raid5f", 00:16:31.570 "superblock": false, 00:16:31.570 "num_base_bdevs": 4, 00:16:31.570 "num_base_bdevs_discovered": 4, 00:16:31.570 "num_base_bdevs_operational": 4, 00:16:31.570 "process": { 00:16:31.570 "type": "rebuild", 00:16:31.570 "target": "spare", 00:16:31.570 "progress": { 00:16:31.570 "blocks": 65280, 00:16:31.570 "percent": 33 00:16:31.570 } 00:16:31.570 }, 00:16:31.570 "base_bdevs_list": [ 00:16:31.570 { 00:16:31.570 "name": "spare", 00:16:31.570 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 00:16:31.570 "is_configured": true, 00:16:31.570 "data_offset": 0, 00:16:31.570 "data_size": 65536 00:16:31.570 }, 00:16:31.570 { 00:16:31.570 "name": "BaseBdev2", 00:16:31.570 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:31.570 "is_configured": true, 00:16:31.570 "data_offset": 0, 00:16:31.570 "data_size": 65536 00:16:31.570 }, 00:16:31.570 { 00:16:31.570 "name": "BaseBdev3", 00:16:31.570 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:31.570 "is_configured": true, 00:16:31.570 "data_offset": 0, 00:16:31.570 "data_size": 65536 00:16:31.570 }, 00:16:31.570 { 00:16:31.570 "name": 
"BaseBdev4", 00:16:31.570 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:31.570 "is_configured": true, 00:16:31.570 "data_offset": 0, 00:16:31.570 "data_size": 65536 00:16:31.570 } 00:16:31.570 ] 00:16:31.570 }' 00:16:31.570 21:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.830 21:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:31.830 21:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.830 21:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:31.830 21:47:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:32.771 21:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:32.772 21:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.772 21:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.772 21:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.772 21:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.772 21:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.772 21:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.772 21:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.772 21:47:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.772 21:47:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.772 21:47:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.772 21:47:51 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.772 "name": "raid_bdev1", 00:16:32.772 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:32.772 "strip_size_kb": 64, 00:16:32.772 "state": "online", 00:16:32.772 "raid_level": "raid5f", 00:16:32.772 "superblock": false, 00:16:32.772 "num_base_bdevs": 4, 00:16:32.772 "num_base_bdevs_discovered": 4, 00:16:32.772 "num_base_bdevs_operational": 4, 00:16:32.772 "process": { 00:16:32.772 "type": "rebuild", 00:16:32.772 "target": "spare", 00:16:32.772 "progress": { 00:16:32.772 "blocks": 86400, 00:16:32.772 "percent": 43 00:16:32.772 } 00:16:32.772 }, 00:16:32.772 "base_bdevs_list": [ 00:16:32.772 { 00:16:32.772 "name": "spare", 00:16:32.772 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 00:16:32.772 "is_configured": true, 00:16:32.772 "data_offset": 0, 00:16:32.772 "data_size": 65536 00:16:32.772 }, 00:16:32.772 { 00:16:32.772 "name": "BaseBdev2", 00:16:32.772 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:32.772 "is_configured": true, 00:16:32.772 "data_offset": 0, 00:16:32.772 "data_size": 65536 00:16:32.772 }, 00:16:32.772 { 00:16:32.772 "name": "BaseBdev3", 00:16:32.772 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:32.772 "is_configured": true, 00:16:32.772 "data_offset": 0, 00:16:32.772 "data_size": 65536 00:16:32.772 }, 00:16:32.772 { 00:16:32.772 "name": "BaseBdev4", 00:16:32.772 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:32.772 "is_configured": true, 00:16:32.772 "data_offset": 0, 00:16:32.772 "data_size": 65536 00:16:32.772 } 00:16:32.772 ] 00:16:32.772 }' 00:16:32.772 21:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.772 21:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.772 21:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.032 21:47:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.032 21:47:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.972 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.972 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.972 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.972 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:33.972 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.972 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.972 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.972 21:47:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.973 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.973 21:47:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.973 21:47:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.973 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.973 "name": "raid_bdev1", 00:16:33.973 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:33.973 "strip_size_kb": 64, 00:16:33.973 "state": "online", 00:16:33.973 "raid_level": "raid5f", 00:16:33.973 "superblock": false, 00:16:33.973 "num_base_bdevs": 4, 00:16:33.973 "num_base_bdevs_discovered": 4, 00:16:33.973 "num_base_bdevs_operational": 4, 00:16:33.973 "process": { 00:16:33.973 "type": "rebuild", 00:16:33.973 "target": "spare", 00:16:33.973 "progress": { 00:16:33.973 "blocks": 109440, 00:16:33.973 "percent": 55 00:16:33.973 } 
00:16:33.973 }, 00:16:33.973 "base_bdevs_list": [ 00:16:33.973 { 00:16:33.973 "name": "spare", 00:16:33.973 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 00:16:33.973 "is_configured": true, 00:16:33.973 "data_offset": 0, 00:16:33.973 "data_size": 65536 00:16:33.973 }, 00:16:33.973 { 00:16:33.973 "name": "BaseBdev2", 00:16:33.973 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:33.973 "is_configured": true, 00:16:33.973 "data_offset": 0, 00:16:33.973 "data_size": 65536 00:16:33.973 }, 00:16:33.973 { 00:16:33.973 "name": "BaseBdev3", 00:16:33.973 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:33.973 "is_configured": true, 00:16:33.973 "data_offset": 0, 00:16:33.973 "data_size": 65536 00:16:33.973 }, 00:16:33.973 { 00:16:33.973 "name": "BaseBdev4", 00:16:33.973 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:33.973 "is_configured": true, 00:16:33.973 "data_offset": 0, 00:16:33.973 "data_size": 65536 00:16:33.973 } 00:16:33.973 ] 00:16:33.973 }' 00:16:33.973 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.973 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.973 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.973 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.973 21:47:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.356 
21:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.356 "name": "raid_bdev1", 00:16:35.356 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:35.356 "strip_size_kb": 64, 00:16:35.356 "state": "online", 00:16:35.356 "raid_level": "raid5f", 00:16:35.356 "superblock": false, 00:16:35.356 "num_base_bdevs": 4, 00:16:35.356 "num_base_bdevs_discovered": 4, 00:16:35.356 "num_base_bdevs_operational": 4, 00:16:35.356 "process": { 00:16:35.356 "type": "rebuild", 00:16:35.356 "target": "spare", 00:16:35.356 "progress": { 00:16:35.356 "blocks": 130560, 00:16:35.356 "percent": 66 00:16:35.356 } 00:16:35.356 }, 00:16:35.356 "base_bdevs_list": [ 00:16:35.356 { 00:16:35.356 "name": "spare", 00:16:35.356 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 00:16:35.356 "is_configured": true, 00:16:35.356 "data_offset": 0, 00:16:35.356 "data_size": 65536 00:16:35.356 }, 00:16:35.356 { 00:16:35.356 "name": "BaseBdev2", 00:16:35.356 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:35.356 "is_configured": true, 00:16:35.356 "data_offset": 0, 00:16:35.356 "data_size": 65536 00:16:35.356 }, 00:16:35.356 { 00:16:35.356 "name": "BaseBdev3", 00:16:35.356 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 
00:16:35.356 "is_configured": true, 00:16:35.356 "data_offset": 0, 00:16:35.356 "data_size": 65536 00:16:35.356 }, 00:16:35.356 { 00:16:35.356 "name": "BaseBdev4", 00:16:35.356 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:35.356 "is_configured": true, 00:16:35.356 "data_offset": 0, 00:16:35.356 "data_size": 65536 00:16:35.356 } 00:16:35.356 ] 00:16:35.356 }' 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.356 21:47:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.356 21:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.356 21:47:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.296 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.296 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.296 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.296 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.296 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.296 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.296 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.296 21:47:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.296 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.296 21:47:55 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:36.296 21:47:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.296 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.296 "name": "raid_bdev1", 00:16:36.296 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:36.296 "strip_size_kb": 64, 00:16:36.296 "state": "online", 00:16:36.296 "raid_level": "raid5f", 00:16:36.296 "superblock": false, 00:16:36.296 "num_base_bdevs": 4, 00:16:36.296 "num_base_bdevs_discovered": 4, 00:16:36.296 "num_base_bdevs_operational": 4, 00:16:36.296 "process": { 00:16:36.296 "type": "rebuild", 00:16:36.296 "target": "spare", 00:16:36.296 "progress": { 00:16:36.296 "blocks": 151680, 00:16:36.296 "percent": 77 00:16:36.296 } 00:16:36.296 }, 00:16:36.296 "base_bdevs_list": [ 00:16:36.296 { 00:16:36.296 "name": "spare", 00:16:36.296 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 00:16:36.296 "is_configured": true, 00:16:36.296 "data_offset": 0, 00:16:36.296 "data_size": 65536 00:16:36.296 }, 00:16:36.296 { 00:16:36.296 "name": "BaseBdev2", 00:16:36.296 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:36.296 "is_configured": true, 00:16:36.296 "data_offset": 0, 00:16:36.296 "data_size": 65536 00:16:36.296 }, 00:16:36.296 { 00:16:36.296 "name": "BaseBdev3", 00:16:36.296 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:36.296 "is_configured": true, 00:16:36.296 "data_offset": 0, 00:16:36.296 "data_size": 65536 00:16:36.296 }, 00:16:36.296 { 00:16:36.296 "name": "BaseBdev4", 00:16:36.297 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:36.297 "is_configured": true, 00:16:36.297 "data_offset": 0, 00:16:36.297 "data_size": 65536 00:16:36.297 } 00:16:36.297 ] 00:16:36.297 }' 00:16:36.297 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.297 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:16:36.297 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.297 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.297 21:47:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:37.235 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.235 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.235 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.235 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.235 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.235 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.235 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.235 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.235 21:47:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.235 21:47:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.235 21:47:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.494 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.494 "name": "raid_bdev1", 00:16:37.494 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:37.494 "strip_size_kb": 64, 00:16:37.494 "state": "online", 00:16:37.494 "raid_level": "raid5f", 00:16:37.494 "superblock": false, 00:16:37.494 "num_base_bdevs": 4, 00:16:37.494 "num_base_bdevs_discovered": 4, 00:16:37.494 "num_base_bdevs_operational": 4, 00:16:37.494 
"process": { 00:16:37.494 "type": "rebuild", 00:16:37.494 "target": "spare", 00:16:37.494 "progress": { 00:16:37.494 "blocks": 174720, 00:16:37.494 "percent": 88 00:16:37.494 } 00:16:37.495 }, 00:16:37.495 "base_bdevs_list": [ 00:16:37.495 { 00:16:37.495 "name": "spare", 00:16:37.495 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 00:16:37.495 "is_configured": true, 00:16:37.495 "data_offset": 0, 00:16:37.495 "data_size": 65536 00:16:37.495 }, 00:16:37.495 { 00:16:37.495 "name": "BaseBdev2", 00:16:37.495 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:37.495 "is_configured": true, 00:16:37.495 "data_offset": 0, 00:16:37.495 "data_size": 65536 00:16:37.495 }, 00:16:37.495 { 00:16:37.495 "name": "BaseBdev3", 00:16:37.495 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:37.495 "is_configured": true, 00:16:37.495 "data_offset": 0, 00:16:37.495 "data_size": 65536 00:16:37.495 }, 00:16:37.495 { 00:16:37.495 "name": "BaseBdev4", 00:16:37.495 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:37.495 "is_configured": true, 00:16:37.495 "data_offset": 0, 00:16:37.495 "data_size": 65536 00:16:37.495 } 00:16:37.495 ] 00:16:37.495 }' 00:16:37.495 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.495 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.495 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.495 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.495 21:47:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.434 21:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.434 21:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.434 21:47:57 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.434 21:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.434 21:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.434 21:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.434 21:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.434 21:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.434 21:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.434 21:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.434 21:47:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.434 21:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.434 "name": "raid_bdev1", 00:16:38.434 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:38.434 "strip_size_kb": 64, 00:16:38.434 "state": "online", 00:16:38.434 "raid_level": "raid5f", 00:16:38.434 "superblock": false, 00:16:38.434 "num_base_bdevs": 4, 00:16:38.434 "num_base_bdevs_discovered": 4, 00:16:38.434 "num_base_bdevs_operational": 4, 00:16:38.434 "process": { 00:16:38.434 "type": "rebuild", 00:16:38.434 "target": "spare", 00:16:38.434 "progress": { 00:16:38.434 "blocks": 195840, 00:16:38.434 "percent": 99 00:16:38.434 } 00:16:38.434 }, 00:16:38.434 "base_bdevs_list": [ 00:16:38.434 { 00:16:38.434 "name": "spare", 00:16:38.434 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 00:16:38.434 "is_configured": true, 00:16:38.434 "data_offset": 0, 00:16:38.434 "data_size": 65536 00:16:38.434 }, 00:16:38.434 { 00:16:38.434 "name": "BaseBdev2", 00:16:38.434 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:38.434 "is_configured": true, 00:16:38.434 
"data_offset": 0, 00:16:38.434 "data_size": 65536 00:16:38.434 }, 00:16:38.434 { 00:16:38.434 "name": "BaseBdev3", 00:16:38.434 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:38.434 "is_configured": true, 00:16:38.434 "data_offset": 0, 00:16:38.434 "data_size": 65536 00:16:38.434 }, 00:16:38.434 { 00:16:38.434 "name": "BaseBdev4", 00:16:38.434 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:38.434 "is_configured": true, 00:16:38.434 "data_offset": 0, 00:16:38.434 "data_size": 65536 00:16:38.434 } 00:16:38.434 ] 00:16:38.434 }' 00:16:38.434 21:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.434 [2024-09-29 21:47:57.398681] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:38.434 [2024-09-29 21:47:57.398750] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:38.434 [2024-09-29 21:47:57.398796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.694 21:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.694 21:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.694 21:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.694 21:47:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.633 "name": "raid_bdev1", 00:16:39.633 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:39.633 "strip_size_kb": 64, 00:16:39.633 "state": "online", 00:16:39.633 "raid_level": "raid5f", 00:16:39.633 "superblock": false, 00:16:39.633 "num_base_bdevs": 4, 00:16:39.633 "num_base_bdevs_discovered": 4, 00:16:39.633 "num_base_bdevs_operational": 4, 00:16:39.633 "base_bdevs_list": [ 00:16:39.633 { 00:16:39.633 "name": "spare", 00:16:39.633 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 00:16:39.633 "is_configured": true, 00:16:39.633 "data_offset": 0, 00:16:39.633 "data_size": 65536 00:16:39.633 }, 00:16:39.633 { 00:16:39.633 "name": "BaseBdev2", 00:16:39.633 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:39.633 "is_configured": true, 00:16:39.633 "data_offset": 0, 00:16:39.633 "data_size": 65536 00:16:39.633 }, 00:16:39.633 { 00:16:39.633 "name": "BaseBdev3", 00:16:39.633 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:39.633 "is_configured": true, 00:16:39.633 "data_offset": 0, 00:16:39.633 "data_size": 65536 00:16:39.633 }, 00:16:39.633 { 00:16:39.633 "name": "BaseBdev4", 00:16:39.633 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:39.633 "is_configured": 
true, 00:16:39.633 "data_offset": 0, 00:16:39.633 "data_size": 65536 00:16:39.633 } 00:16:39.633 ] 00:16:39.633 }' 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:39.633 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.893 "name": "raid_bdev1", 00:16:39.893 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:39.893 "strip_size_kb": 64, 00:16:39.893 "state": 
"online", 00:16:39.893 "raid_level": "raid5f", 00:16:39.893 "superblock": false, 00:16:39.893 "num_base_bdevs": 4, 00:16:39.893 "num_base_bdevs_discovered": 4, 00:16:39.893 "num_base_bdevs_operational": 4, 00:16:39.893 "base_bdevs_list": [ 00:16:39.893 { 00:16:39.893 "name": "spare", 00:16:39.893 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 00:16:39.893 "is_configured": true, 00:16:39.893 "data_offset": 0, 00:16:39.893 "data_size": 65536 00:16:39.893 }, 00:16:39.893 { 00:16:39.893 "name": "BaseBdev2", 00:16:39.893 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:39.893 "is_configured": true, 00:16:39.893 "data_offset": 0, 00:16:39.893 "data_size": 65536 00:16:39.893 }, 00:16:39.893 { 00:16:39.893 "name": "BaseBdev3", 00:16:39.893 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:39.893 "is_configured": true, 00:16:39.893 "data_offset": 0, 00:16:39.893 "data_size": 65536 00:16:39.893 }, 00:16:39.893 { 00:16:39.893 "name": "BaseBdev4", 00:16:39.893 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:39.893 "is_configured": true, 00:16:39.893 "data_offset": 0, 00:16:39.893 "data_size": 65536 00:16:39.893 } 00:16:39.893 ] 00:16:39.893 }' 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.893 21:47:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.893 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.893 "name": "raid_bdev1", 00:16:39.893 "uuid": "6887a5ee-297e-450c-bad3-b4133adc9773", 00:16:39.893 "strip_size_kb": 64, 00:16:39.893 "state": "online", 00:16:39.893 "raid_level": "raid5f", 00:16:39.893 "superblock": false, 00:16:39.893 "num_base_bdevs": 4, 00:16:39.893 "num_base_bdevs_discovered": 4, 00:16:39.893 "num_base_bdevs_operational": 4, 00:16:39.893 "base_bdevs_list": [ 00:16:39.893 { 00:16:39.893 "name": "spare", 00:16:39.893 "uuid": "62d552b6-9201-5648-9fef-cec9863d5dbe", 00:16:39.893 "is_configured": true, 00:16:39.893 "data_offset": 0, 00:16:39.893 "data_size": 65536 00:16:39.893 }, 00:16:39.893 { 00:16:39.893 
"name": "BaseBdev2", 00:16:39.893 "uuid": "1451f63d-c8d7-5b90-a06e-b389bb744f2a", 00:16:39.893 "is_configured": true, 00:16:39.893 "data_offset": 0, 00:16:39.893 "data_size": 65536 00:16:39.893 }, 00:16:39.893 { 00:16:39.893 "name": "BaseBdev3", 00:16:39.893 "uuid": "1087e074-37e2-505b-b5e7-9c62b5fc422a", 00:16:39.894 "is_configured": true, 00:16:39.894 "data_offset": 0, 00:16:39.894 "data_size": 65536 00:16:39.894 }, 00:16:39.894 { 00:16:39.894 "name": "BaseBdev4", 00:16:39.894 "uuid": "6f24af1f-8aa8-51ff-9c08-ab0084c67261", 00:16:39.894 "is_configured": true, 00:16:39.894 "data_offset": 0, 00:16:39.894 "data_size": 65536 00:16:39.894 } 00:16:39.894 ] 00:16:39.894 }' 00:16:39.894 21:47:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.894 21:47:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.463 [2024-09-29 21:47:59.279640] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.463 [2024-09-29 21:47:59.279672] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.463 [2024-09-29 21:47:59.279756] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.463 [2024-09-29 21:47:59.279846] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.463 [2024-09-29 21:47:59.279861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.463 21:47:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:40.463 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:40.724 /dev/nbd0 00:16:40.724 21:47:59 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:40.724 1+0 records in 00:16:40.724 1+0 records out 00:16:40.724 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031474 s, 13.0 MB/s 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:40.724 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:40.984 /dev/nbd1 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:40.984 1+0 records in 00:16:40.984 1+0 records out 00:16:40.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416115 s, 9.8 MB/s 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:40.984 21:47:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:41.244 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:41.244 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.244 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:41.244 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:41.244 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:41.244 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.244 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84642 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 84642 ']' 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 84642 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:41.504 21:48:00 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84642 00:16:41.764 killing process with pid 84642 00:16:41.764 Received shutdown signal, test time was about 60.000000 seconds 00:16:41.764 00:16:41.764 Latency(us) 00:16:41.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.764 =================================================================================================================== 00:16:41.764 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:41.764 21:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:41.764 21:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:41.764 21:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84642' 00:16:41.764 21:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 84642 00:16:41.764 [2024-09-29 21:48:00.492321] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.764 21:48:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 84642 00:16:42.024 [2024-09-29 21:48:00.936582] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:43.404 00:16:43.404 real 0m20.208s 00:16:43.404 user 0m24.139s 00:16:43.404 sys 0m2.401s 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.404 ************************************ 00:16:43.404 END TEST raid5f_rebuild_test 00:16:43.404 ************************************ 00:16:43.404 21:48:02 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:43.404 21:48:02 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:43.404 21:48:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.404 21:48:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:43.404 ************************************ 00:16:43.404 START TEST raid5f_rebuild_test_sb 00:16:43.404 ************************************ 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:43.404 21:48:02 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85164 
00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85164 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 85164 ']' 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:43.404 21:48:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.404 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:43.404 Zero copy mechanism will not be used. 00:16:43.404 [2024-09-29 21:48:02.286305] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:16:43.404 [2024-09-29 21:48:02.286428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85164 ] 00:16:43.664 [2024-09-29 21:48:02.454931] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.664 [2024-09-29 21:48:02.645225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.924 [2024-09-29 21:48:02.838791] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.924 [2024-09-29 21:48:02.838846] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.184 BaseBdev1_malloc 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.184 [2024-09-29 21:48:03.118785] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:44.184 [2024-09-29 21:48:03.118854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.184 [2024-09-29 21:48:03.118875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:44.184 [2024-09-29 21:48:03.118889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.184 [2024-09-29 21:48:03.120847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.184 [2024-09-29 21:48:03.120891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:44.184 BaseBdev1 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.184 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.444 BaseBdev2_malloc 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.444 [2024-09-29 21:48:03.201523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:44.444 [2024-09-29 21:48:03.201585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:44.444 [2024-09-29 21:48:03.201601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:44.444 [2024-09-29 21:48:03.201612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.444 [2024-09-29 21:48:03.203511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.444 [2024-09-29 21:48:03.203550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:44.444 BaseBdev2 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.444 BaseBdev3_malloc 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.444 [2024-09-29 21:48:03.256912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:44.444 [2024-09-29 21:48:03.256979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.444 [2024-09-29 21:48:03.256997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:44.444 [2024-09-29 
21:48:03.257008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.444 [2024-09-29 21:48:03.258843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.444 [2024-09-29 21:48:03.258881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:44.444 BaseBdev3 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.444 BaseBdev4_malloc 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.444 [2024-09-29 21:48:03.313502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:44.444 [2024-09-29 21:48:03.313554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.444 [2024-09-29 21:48:03.313572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:44.444 [2024-09-29 21:48:03.313582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.444 BaseBdev4 00:16:44.444 [2024-09-29 21:48:03.315567] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.444 [2024-09-29 21:48:03.315605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.444 spare_malloc 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.444 spare_delay 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.444 [2024-09-29 21:48:03.375207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:44.444 [2024-09-29 21:48:03.375279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.444 [2024-09-29 21:48:03.375298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:44.444 [2024-09-29 21:48:03.375308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.444 [2024-09-29 21:48:03.377233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.444 [2024-09-29 21:48:03.377272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:44.444 spare 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.444 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.444 [2024-09-29 21:48:03.387250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.444 [2024-09-29 21:48:03.388906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.444 [2024-09-29 21:48:03.388985] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:44.444 [2024-09-29 21:48:03.389032] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:44.444 [2024-09-29 21:48:03.389210] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:44.444 [2024-09-29 21:48:03.389229] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:44.444 [2024-09-29 21:48:03.389469] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:44.444 [2024-09-29 21:48:03.395535] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:44.445 [2024-09-29 21:48:03.395559] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:44.445 [2024-09-29 21:48:03.395713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.445 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.705 21:48:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.705 "name": "raid_bdev1", 00:16:44.705 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:44.705 "strip_size_kb": 64, 00:16:44.705 "state": "online", 00:16:44.705 "raid_level": "raid5f", 00:16:44.705 "superblock": true, 00:16:44.705 "num_base_bdevs": 4, 00:16:44.705 "num_base_bdevs_discovered": 4, 00:16:44.705 "num_base_bdevs_operational": 4, 00:16:44.705 "base_bdevs_list": [ 00:16:44.705 { 00:16:44.705 "name": "BaseBdev1", 00:16:44.705 "uuid": "4a968b3a-5cb8-50b9-8476-f69a1171b50b", 00:16:44.705 "is_configured": true, 00:16:44.705 "data_offset": 2048, 00:16:44.705 "data_size": 63488 00:16:44.705 }, 00:16:44.705 { 00:16:44.705 "name": "BaseBdev2", 00:16:44.705 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:44.705 "is_configured": true, 00:16:44.705 "data_offset": 2048, 00:16:44.705 "data_size": 63488 00:16:44.705 }, 00:16:44.705 { 00:16:44.705 "name": "BaseBdev3", 00:16:44.705 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:44.705 "is_configured": true, 00:16:44.705 "data_offset": 2048, 00:16:44.705 "data_size": 63488 00:16:44.705 }, 00:16:44.705 { 00:16:44.705 "name": "BaseBdev4", 00:16:44.705 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:44.705 "is_configured": true, 00:16:44.705 "data_offset": 2048, 00:16:44.705 "data_size": 63488 00:16:44.705 } 00:16:44.705 ] 00:16:44.705 }' 00:16:44.705 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.705 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.965 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:44.965 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.965 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.965 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:44.965 [2024-09-29 21:48:03.874608] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.965 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.965 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:44.965 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.965 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:44.965 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.965 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.965 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:45.225 21:48:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:45.225 [2024-09-29 21:48:04.130082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:45.225 /dev/nbd0 00:16:45.225 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:45.225 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:45.225 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:45.225 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:45.225 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:45.225 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:45.225 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:45.225 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:45.225 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:45.225 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:45.225 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:45.225 1+0 records in 00:16:45.225 1+0 records out 00:16:45.225 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000342191 s, 12.0 MB/s 00:16:45.484 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.484 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:45.484 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:45.484 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:45.484 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:45.484 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:45.484 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:45.484 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:45.484 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:45.484 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:45.484 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:45.744 496+0 records in 00:16:45.744 496+0 records out 00:16:45.744 97517568 bytes (98 MB, 93 MiB) copied, 0.492884 s, 198 MB/s 00:16:45.744 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:45.744 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.744 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:45.744 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:45.744 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # 
local i 00:16:45.744 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.744 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:46.003 [2024-09-29 21:48:04.927777] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.003 [2024-09-29 21:48:04.940421] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.003 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.263 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.263 "name": "raid_bdev1", 00:16:46.263 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:46.263 "strip_size_kb": 64, 00:16:46.263 "state": "online", 00:16:46.263 "raid_level": "raid5f", 00:16:46.263 "superblock": true, 00:16:46.263 "num_base_bdevs": 4, 00:16:46.263 "num_base_bdevs_discovered": 3, 00:16:46.263 "num_base_bdevs_operational": 3, 00:16:46.263 "base_bdevs_list": [ 00:16:46.263 { 00:16:46.263 "name": null, 
00:16:46.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.263 "is_configured": false, 00:16:46.263 "data_offset": 0, 00:16:46.263 "data_size": 63488 00:16:46.263 }, 00:16:46.263 { 00:16:46.263 "name": "BaseBdev2", 00:16:46.263 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:46.263 "is_configured": true, 00:16:46.263 "data_offset": 2048, 00:16:46.263 "data_size": 63488 00:16:46.263 }, 00:16:46.263 { 00:16:46.263 "name": "BaseBdev3", 00:16:46.263 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:46.263 "is_configured": true, 00:16:46.263 "data_offset": 2048, 00:16:46.263 "data_size": 63488 00:16:46.263 }, 00:16:46.263 { 00:16:46.263 "name": "BaseBdev4", 00:16:46.263 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:46.263 "is_configured": true, 00:16:46.263 "data_offset": 2048, 00:16:46.263 "data_size": 63488 00:16:46.263 } 00:16:46.263 ] 00:16:46.263 }' 00:16:46.263 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.263 21:48:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.522 21:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:46.522 21:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.522 21:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.522 [2024-09-29 21:48:05.383637] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.523 [2024-09-29 21:48:05.397097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:46.523 21:48:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.523 21:48:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:46.523 [2024-09-29 21:48:05.406192] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid 
bdev raid_bdev1 00:16:47.462 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.462 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.462 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.462 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.462 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.462 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.462 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.462 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.462 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.462 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.722 "name": "raid_bdev1", 00:16:47.722 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:47.722 "strip_size_kb": 64, 00:16:47.722 "state": "online", 00:16:47.722 "raid_level": "raid5f", 00:16:47.722 "superblock": true, 00:16:47.722 "num_base_bdevs": 4, 00:16:47.722 "num_base_bdevs_discovered": 4, 00:16:47.722 "num_base_bdevs_operational": 4, 00:16:47.722 "process": { 00:16:47.722 "type": "rebuild", 00:16:47.722 "target": "spare", 00:16:47.722 "progress": { 00:16:47.722 "blocks": 19200, 00:16:47.722 "percent": 10 00:16:47.722 } 00:16:47.722 }, 00:16:47.722 "base_bdevs_list": [ 00:16:47.722 { 00:16:47.722 "name": "spare", 00:16:47.722 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:16:47.722 "is_configured": true, 
00:16:47.722 "data_offset": 2048, 00:16:47.722 "data_size": 63488 00:16:47.722 }, 00:16:47.722 { 00:16:47.722 "name": "BaseBdev2", 00:16:47.722 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:47.722 "is_configured": true, 00:16:47.722 "data_offset": 2048, 00:16:47.722 "data_size": 63488 00:16:47.722 }, 00:16:47.722 { 00:16:47.722 "name": "BaseBdev3", 00:16:47.722 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:47.722 "is_configured": true, 00:16:47.722 "data_offset": 2048, 00:16:47.722 "data_size": 63488 00:16:47.722 }, 00:16:47.722 { 00:16:47.722 "name": "BaseBdev4", 00:16:47.722 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:47.722 "is_configured": true, 00:16:47.722 "data_offset": 2048, 00:16:47.722 "data_size": 63488 00:16:47.722 } 00:16:47.722 ] 00:16:47.722 }' 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.722 [2024-09-29 21:48:06.532722] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.722 [2024-09-29 21:48:06.611393] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:47.722 [2024-09-29 21:48:06.611453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.722 [2024-09-29 
21:48:06.611468] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.722 [2024-09-29 21:48:06.611480] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.722 "name": "raid_bdev1", 00:16:47.722 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:47.722 "strip_size_kb": 64, 00:16:47.722 "state": "online", 00:16:47.722 "raid_level": "raid5f", 00:16:47.722 "superblock": true, 00:16:47.722 "num_base_bdevs": 4, 00:16:47.722 "num_base_bdevs_discovered": 3, 00:16:47.722 "num_base_bdevs_operational": 3, 00:16:47.722 "base_bdevs_list": [ 00:16:47.722 { 00:16:47.722 "name": null, 00:16:47.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.722 "is_configured": false, 00:16:47.722 "data_offset": 0, 00:16:47.722 "data_size": 63488 00:16:47.722 }, 00:16:47.722 { 00:16:47.722 "name": "BaseBdev2", 00:16:47.722 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:47.722 "is_configured": true, 00:16:47.722 "data_offset": 2048, 00:16:47.722 "data_size": 63488 00:16:47.722 }, 00:16:47.722 { 00:16:47.722 "name": "BaseBdev3", 00:16:47.722 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:47.722 "is_configured": true, 00:16:47.722 "data_offset": 2048, 00:16:47.722 "data_size": 63488 00:16:47.722 }, 00:16:47.722 { 00:16:47.722 "name": "BaseBdev4", 00:16:47.722 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:47.722 "is_configured": true, 00:16:47.722 "data_offset": 2048, 00:16:47.722 "data_size": 63488 00:16:47.722 } 00:16:47.722 ] 00:16:47.722 }' 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.722 21:48:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.292 "name": "raid_bdev1", 00:16:48.292 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:48.292 "strip_size_kb": 64, 00:16:48.292 "state": "online", 00:16:48.292 "raid_level": "raid5f", 00:16:48.292 "superblock": true, 00:16:48.292 "num_base_bdevs": 4, 00:16:48.292 "num_base_bdevs_discovered": 3, 00:16:48.292 "num_base_bdevs_operational": 3, 00:16:48.292 "base_bdevs_list": [ 00:16:48.292 { 00:16:48.292 "name": null, 00:16:48.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.292 "is_configured": false, 00:16:48.292 "data_offset": 0, 00:16:48.292 "data_size": 63488 00:16:48.292 }, 00:16:48.292 { 00:16:48.292 "name": "BaseBdev2", 00:16:48.292 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:48.292 "is_configured": true, 00:16:48.292 "data_offset": 2048, 00:16:48.292 "data_size": 63488 00:16:48.292 }, 00:16:48.292 { 00:16:48.292 "name": "BaseBdev3", 00:16:48.292 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:48.292 "is_configured": true, 00:16:48.292 "data_offset": 2048, 00:16:48.292 "data_size": 63488 00:16:48.292 }, 
00:16:48.292 { 00:16:48.292 "name": "BaseBdev4", 00:16:48.292 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:48.292 "is_configured": true, 00:16:48.292 "data_offset": 2048, 00:16:48.292 "data_size": 63488 00:16:48.292 } 00:16:48.292 ] 00:16:48.292 }' 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.292 [2024-09-29 21:48:07.222143] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.292 [2024-09-29 21:48:07.235781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.292 21:48:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:48.292 [2024-09-29 21:48:07.244259] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.673 "name": "raid_bdev1", 00:16:49.673 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:49.673 "strip_size_kb": 64, 00:16:49.673 "state": "online", 00:16:49.673 "raid_level": "raid5f", 00:16:49.673 "superblock": true, 00:16:49.673 "num_base_bdevs": 4, 00:16:49.673 "num_base_bdevs_discovered": 4, 00:16:49.673 "num_base_bdevs_operational": 4, 00:16:49.673 "process": { 00:16:49.673 "type": "rebuild", 00:16:49.673 "target": "spare", 00:16:49.673 "progress": { 00:16:49.673 "blocks": 19200, 00:16:49.673 "percent": 10 00:16:49.673 } 00:16:49.673 }, 00:16:49.673 "base_bdevs_list": [ 00:16:49.673 { 00:16:49.673 "name": "spare", 00:16:49.673 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:16:49.673 "is_configured": true, 00:16:49.673 "data_offset": 2048, 00:16:49.673 "data_size": 63488 00:16:49.673 }, 00:16:49.673 { 00:16:49.673 "name": "BaseBdev2", 00:16:49.673 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:49.673 "is_configured": true, 00:16:49.673 "data_offset": 2048, 00:16:49.673 "data_size": 63488 00:16:49.673 }, 00:16:49.673 { 00:16:49.673 "name": "BaseBdev3", 00:16:49.673 "uuid": 
"00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:49.673 "is_configured": true, 00:16:49.673 "data_offset": 2048, 00:16:49.673 "data_size": 63488 00:16:49.673 }, 00:16:49.673 { 00:16:49.673 "name": "BaseBdev4", 00:16:49.673 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:49.673 "is_configured": true, 00:16:49.673 "data_offset": 2048, 00:16:49.673 "data_size": 63488 00:16:49.673 } 00:16:49.673 ] 00:16:49.673 }' 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:49.673 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=644 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.673 "name": "raid_bdev1", 00:16:49.673 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:49.673 "strip_size_kb": 64, 00:16:49.673 "state": "online", 00:16:49.673 "raid_level": "raid5f", 00:16:49.673 "superblock": true, 00:16:49.673 "num_base_bdevs": 4, 00:16:49.673 "num_base_bdevs_discovered": 4, 00:16:49.673 "num_base_bdevs_operational": 4, 00:16:49.673 "process": { 00:16:49.673 "type": "rebuild", 00:16:49.673 "target": "spare", 00:16:49.673 "progress": { 00:16:49.673 "blocks": 21120, 00:16:49.673 "percent": 11 00:16:49.673 } 00:16:49.673 }, 00:16:49.673 "base_bdevs_list": [ 00:16:49.673 { 00:16:49.673 "name": "spare", 00:16:49.673 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:16:49.673 "is_configured": true, 00:16:49.673 "data_offset": 2048, 00:16:49.673 "data_size": 63488 00:16:49.673 }, 00:16:49.673 { 00:16:49.673 "name": "BaseBdev2", 00:16:49.673 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:49.673 "is_configured": true, 00:16:49.673 "data_offset": 2048, 00:16:49.673 "data_size": 63488 00:16:49.673 }, 00:16:49.673 { 00:16:49.673 "name": "BaseBdev3", 00:16:49.673 "uuid": 
"00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:49.673 "is_configured": true, 00:16:49.673 "data_offset": 2048, 00:16:49.673 "data_size": 63488 00:16:49.673 }, 00:16:49.673 { 00:16:49.673 "name": "BaseBdev4", 00:16:49.673 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:49.673 "is_configured": true, 00:16:49.673 "data_offset": 2048, 00:16:49.673 "data_size": 63488 00:16:49.673 } 00:16:49.673 ] 00:16:49.673 }' 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.673 21:48:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.623 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:50.623 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:50.623 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.623 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:50.623 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:50.623 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.623 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.623 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.623 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:50.623 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.623 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.623 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.623 "name": "raid_bdev1", 00:16:50.623 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:50.623 "strip_size_kb": 64, 00:16:50.623 "state": "online", 00:16:50.623 "raid_level": "raid5f", 00:16:50.623 "superblock": true, 00:16:50.623 "num_base_bdevs": 4, 00:16:50.623 "num_base_bdevs_discovered": 4, 00:16:50.623 "num_base_bdevs_operational": 4, 00:16:50.623 "process": { 00:16:50.623 "type": "rebuild", 00:16:50.623 "target": "spare", 00:16:50.623 "progress": { 00:16:50.623 "blocks": 42240, 00:16:50.623 "percent": 22 00:16:50.623 } 00:16:50.623 }, 00:16:50.623 "base_bdevs_list": [ 00:16:50.623 { 00:16:50.623 "name": "spare", 00:16:50.623 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:16:50.623 "is_configured": true, 00:16:50.623 "data_offset": 2048, 00:16:50.623 "data_size": 63488 00:16:50.623 }, 00:16:50.623 { 00:16:50.623 "name": "BaseBdev2", 00:16:50.623 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:50.623 "is_configured": true, 00:16:50.623 "data_offset": 2048, 00:16:50.623 "data_size": 63488 00:16:50.623 }, 00:16:50.623 { 00:16:50.623 "name": "BaseBdev3", 00:16:50.623 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:50.623 "is_configured": true, 00:16:50.623 "data_offset": 2048, 00:16:50.623 "data_size": 63488 00:16:50.623 }, 00:16:50.623 { 00:16:50.623 "name": "BaseBdev4", 00:16:50.623 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:50.623 "is_configured": true, 00:16:50.623 "data_offset": 2048, 00:16:50.623 "data_size": 63488 00:16:50.623 } 00:16:50.623 ] 00:16:50.623 }' 00:16:50.623 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.882 21:48:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.882 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.882 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.882 21:48:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.823 "name": "raid_bdev1", 00:16:51.823 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:51.823 "strip_size_kb": 64, 00:16:51.823 "state": "online", 00:16:51.823 "raid_level": "raid5f", 00:16:51.823 "superblock": true, 
00:16:51.823 "num_base_bdevs": 4, 00:16:51.823 "num_base_bdevs_discovered": 4, 00:16:51.823 "num_base_bdevs_operational": 4, 00:16:51.823 "process": { 00:16:51.823 "type": "rebuild", 00:16:51.823 "target": "spare", 00:16:51.823 "progress": { 00:16:51.823 "blocks": 65280, 00:16:51.823 "percent": 34 00:16:51.823 } 00:16:51.823 }, 00:16:51.823 "base_bdevs_list": [ 00:16:51.823 { 00:16:51.823 "name": "spare", 00:16:51.823 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:16:51.823 "is_configured": true, 00:16:51.823 "data_offset": 2048, 00:16:51.823 "data_size": 63488 00:16:51.823 }, 00:16:51.823 { 00:16:51.823 "name": "BaseBdev2", 00:16:51.823 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:51.823 "is_configured": true, 00:16:51.823 "data_offset": 2048, 00:16:51.823 "data_size": 63488 00:16:51.823 }, 00:16:51.823 { 00:16:51.823 "name": "BaseBdev3", 00:16:51.823 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:51.823 "is_configured": true, 00:16:51.823 "data_offset": 2048, 00:16:51.823 "data_size": 63488 00:16:51.823 }, 00:16:51.823 { 00:16:51.823 "name": "BaseBdev4", 00:16:51.823 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:51.823 "is_configured": true, 00:16:51.823 "data_offset": 2048, 00:16:51.823 "data_size": 63488 00:16:51.823 } 00:16:51.823 ] 00:16:51.823 }' 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.823 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.083 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.083 21:48:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:53.022 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:53.023 21:48:11 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.023 "name": "raid_bdev1", 00:16:53.023 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:53.023 "strip_size_kb": 64, 00:16:53.023 "state": "online", 00:16:53.023 "raid_level": "raid5f", 00:16:53.023 "superblock": true, 00:16:53.023 "num_base_bdevs": 4, 00:16:53.023 "num_base_bdevs_discovered": 4, 00:16:53.023 "num_base_bdevs_operational": 4, 00:16:53.023 "process": { 00:16:53.023 "type": "rebuild", 00:16:53.023 "target": "spare", 00:16:53.023 "progress": { 00:16:53.023 "blocks": 86400, 00:16:53.023 "percent": 45 00:16:53.023 } 00:16:53.023 }, 00:16:53.023 "base_bdevs_list": [ 00:16:53.023 { 00:16:53.023 "name": "spare", 00:16:53.023 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:16:53.023 "is_configured": true, 00:16:53.023 "data_offset": 2048, 00:16:53.023 
"data_size": 63488 00:16:53.023 }, 00:16:53.023 { 00:16:53.023 "name": "BaseBdev2", 00:16:53.023 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:53.023 "is_configured": true, 00:16:53.023 "data_offset": 2048, 00:16:53.023 "data_size": 63488 00:16:53.023 }, 00:16:53.023 { 00:16:53.023 "name": "BaseBdev3", 00:16:53.023 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:53.023 "is_configured": true, 00:16:53.023 "data_offset": 2048, 00:16:53.023 "data_size": 63488 00:16:53.023 }, 00:16:53.023 { 00:16:53.023 "name": "BaseBdev4", 00:16:53.023 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:53.023 "is_configured": true, 00:16:53.023 "data_offset": 2048, 00:16:53.023 "data_size": 63488 00:16:53.023 } 00:16:53.023 ] 00:16:53.023 }' 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.023 21:48:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:54.404 21:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:54.404 21:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.404 21:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.404 21:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.404 21:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.404 21:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:54.404 21:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.404 21:48:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.404 21:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.404 21:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.404 21:48:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.404 21:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.404 "name": "raid_bdev1", 00:16:54.404 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:54.404 "strip_size_kb": 64, 00:16:54.404 "state": "online", 00:16:54.404 "raid_level": "raid5f", 00:16:54.404 "superblock": true, 00:16:54.404 "num_base_bdevs": 4, 00:16:54.404 "num_base_bdevs_discovered": 4, 00:16:54.404 "num_base_bdevs_operational": 4, 00:16:54.404 "process": { 00:16:54.404 "type": "rebuild", 00:16:54.404 "target": "spare", 00:16:54.404 "progress": { 00:16:54.404 "blocks": 109440, 00:16:54.404 "percent": 57 00:16:54.404 } 00:16:54.404 }, 00:16:54.404 "base_bdevs_list": [ 00:16:54.404 { 00:16:54.404 "name": "spare", 00:16:54.404 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:16:54.404 "is_configured": true, 00:16:54.404 "data_offset": 2048, 00:16:54.404 "data_size": 63488 00:16:54.404 }, 00:16:54.404 { 00:16:54.404 "name": "BaseBdev2", 00:16:54.404 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:54.404 "is_configured": true, 00:16:54.404 "data_offset": 2048, 00:16:54.404 "data_size": 63488 00:16:54.404 }, 00:16:54.404 { 00:16:54.404 "name": "BaseBdev3", 00:16:54.404 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:54.404 "is_configured": true, 00:16:54.404 "data_offset": 2048, 00:16:54.404 "data_size": 63488 00:16:54.404 }, 00:16:54.404 { 00:16:54.404 "name": "BaseBdev4", 
00:16:54.404 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:54.404 "is_configured": true, 00:16:54.404 "data_offset": 2048, 00:16:54.404 "data_size": 63488 00:16:54.404 } 00:16:54.404 ] 00:16:54.404 }' 00:16:54.405 21:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.405 21:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:54.405 21:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.405 21:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.405 21:48:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.355 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.355 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.355 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.355 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.355 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.355 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.355 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.355 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.355 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.355 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.355 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:55.355 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.355 "name": "raid_bdev1", 00:16:55.355 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:55.355 "strip_size_kb": 64, 00:16:55.355 "state": "online", 00:16:55.355 "raid_level": "raid5f", 00:16:55.355 "superblock": true, 00:16:55.355 "num_base_bdevs": 4, 00:16:55.355 "num_base_bdevs_discovered": 4, 00:16:55.355 "num_base_bdevs_operational": 4, 00:16:55.355 "process": { 00:16:55.355 "type": "rebuild", 00:16:55.355 "target": "spare", 00:16:55.355 "progress": { 00:16:55.355 "blocks": 130560, 00:16:55.355 "percent": 68 00:16:55.355 } 00:16:55.355 }, 00:16:55.355 "base_bdevs_list": [ 00:16:55.355 { 00:16:55.355 "name": "spare", 00:16:55.355 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:16:55.355 "is_configured": true, 00:16:55.355 "data_offset": 2048, 00:16:55.355 "data_size": 63488 00:16:55.355 }, 00:16:55.355 { 00:16:55.355 "name": "BaseBdev2", 00:16:55.355 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:55.355 "is_configured": true, 00:16:55.355 "data_offset": 2048, 00:16:55.355 "data_size": 63488 00:16:55.355 }, 00:16:55.355 { 00:16:55.355 "name": "BaseBdev3", 00:16:55.355 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:55.355 "is_configured": true, 00:16:55.355 "data_offset": 2048, 00:16:55.355 "data_size": 63488 00:16:55.355 }, 00:16:55.355 { 00:16:55.356 "name": "BaseBdev4", 00:16:55.356 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:55.356 "is_configured": true, 00:16:55.356 "data_offset": 2048, 00:16:55.356 "data_size": 63488 00:16:55.356 } 00:16:55.356 ] 00:16:55.356 }' 00:16:55.356 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.356 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.356 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:16:55.356 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.356 21:48:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.336 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.336 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.336 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.336 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.336 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.336 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.336 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.336 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.336 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.336 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.336 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.336 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.336 "name": "raid_bdev1", 00:16:56.336 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:56.336 "strip_size_kb": 64, 00:16:56.336 "state": "online", 00:16:56.336 "raid_level": "raid5f", 00:16:56.336 "superblock": true, 00:16:56.336 "num_base_bdevs": 4, 00:16:56.336 "num_base_bdevs_discovered": 4, 00:16:56.336 "num_base_bdevs_operational": 4, 00:16:56.336 "process": { 00:16:56.336 "type": "rebuild", 00:16:56.336 "target": "spare", 
00:16:56.336 "progress": { 00:16:56.336 "blocks": 151680, 00:16:56.336 "percent": 79 00:16:56.336 } 00:16:56.336 }, 00:16:56.336 "base_bdevs_list": [ 00:16:56.336 { 00:16:56.336 "name": "spare", 00:16:56.336 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:16:56.336 "is_configured": true, 00:16:56.336 "data_offset": 2048, 00:16:56.336 "data_size": 63488 00:16:56.336 }, 00:16:56.336 { 00:16:56.336 "name": "BaseBdev2", 00:16:56.336 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:56.336 "is_configured": true, 00:16:56.336 "data_offset": 2048, 00:16:56.336 "data_size": 63488 00:16:56.336 }, 00:16:56.336 { 00:16:56.336 "name": "BaseBdev3", 00:16:56.336 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:56.336 "is_configured": true, 00:16:56.336 "data_offset": 2048, 00:16:56.336 "data_size": 63488 00:16:56.336 }, 00:16:56.336 { 00:16:56.336 "name": "BaseBdev4", 00:16:56.336 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:56.336 "is_configured": true, 00:16:56.336 "data_offset": 2048, 00:16:56.336 "data_size": 63488 00:16:56.336 } 00:16:56.336 ] 00:16:56.336 }' 00:16:56.337 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.597 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.597 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.597 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.597 21:48:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.535 "name": "raid_bdev1", 00:16:57.535 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:57.535 "strip_size_kb": 64, 00:16:57.535 "state": "online", 00:16:57.535 "raid_level": "raid5f", 00:16:57.535 "superblock": true, 00:16:57.535 "num_base_bdevs": 4, 00:16:57.535 "num_base_bdevs_discovered": 4, 00:16:57.535 "num_base_bdevs_operational": 4, 00:16:57.535 "process": { 00:16:57.535 "type": "rebuild", 00:16:57.535 "target": "spare", 00:16:57.535 "progress": { 00:16:57.535 "blocks": 174720, 00:16:57.535 "percent": 91 00:16:57.535 } 00:16:57.535 }, 00:16:57.535 "base_bdevs_list": [ 00:16:57.535 { 00:16:57.535 "name": "spare", 00:16:57.535 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:16:57.535 "is_configured": true, 00:16:57.535 "data_offset": 2048, 00:16:57.535 "data_size": 63488 00:16:57.535 }, 00:16:57.535 { 00:16:57.535 "name": "BaseBdev2", 00:16:57.535 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:57.535 "is_configured": true, 00:16:57.535 
"data_offset": 2048, 00:16:57.535 "data_size": 63488 00:16:57.535 }, 00:16:57.535 { 00:16:57.535 "name": "BaseBdev3", 00:16:57.535 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:57.535 "is_configured": true, 00:16:57.535 "data_offset": 2048, 00:16:57.535 "data_size": 63488 00:16:57.535 }, 00:16:57.535 { 00:16:57.535 "name": "BaseBdev4", 00:16:57.535 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:57.535 "is_configured": true, 00:16:57.535 "data_offset": 2048, 00:16:57.535 "data_size": 63488 00:16:57.535 } 00:16:57.535 ] 00:16:57.535 }' 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.535 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.795 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.796 21:48:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.365 [2024-09-29 21:48:17.282083] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:58.366 [2024-09-29 21:48:17.282145] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:58.366 [2024-09-29 21:48:17.282247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.626 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.626 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.626 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.626 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.626 21:48:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.626 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.626 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.626 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.626 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.626 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.626 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.626 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.626 "name": "raid_bdev1", 00:16:58.626 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:58.626 "strip_size_kb": 64, 00:16:58.626 "state": "online", 00:16:58.626 "raid_level": "raid5f", 00:16:58.626 "superblock": true, 00:16:58.626 "num_base_bdevs": 4, 00:16:58.626 "num_base_bdevs_discovered": 4, 00:16:58.626 "num_base_bdevs_operational": 4, 00:16:58.626 "base_bdevs_list": [ 00:16:58.626 { 00:16:58.626 "name": "spare", 00:16:58.626 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:16:58.626 "is_configured": true, 00:16:58.626 "data_offset": 2048, 00:16:58.626 "data_size": 63488 00:16:58.626 }, 00:16:58.626 { 00:16:58.626 "name": "BaseBdev2", 00:16:58.626 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:58.626 "is_configured": true, 00:16:58.626 "data_offset": 2048, 00:16:58.626 "data_size": 63488 00:16:58.626 }, 00:16:58.626 { 00:16:58.626 "name": "BaseBdev3", 00:16:58.626 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:58.626 "is_configured": true, 00:16:58.626 "data_offset": 2048, 00:16:58.626 "data_size": 63488 00:16:58.626 }, 00:16:58.626 { 00:16:58.626 "name": "BaseBdev4", 00:16:58.626 "uuid": 
"72dfae66-403b-51be-b266-1f24f7a46821", 00:16:58.626 "is_configured": true, 00:16:58.626 "data_offset": 2048, 00:16:58.626 "data_size": 63488 00:16:58.626 } 00:16:58.626 ] 00:16:58.626 }' 00:16:58.626 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.886 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:58.886 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.886 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:58.886 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:58.886 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:58.886 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.887 "name": 
"raid_bdev1", 00:16:58.887 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:58.887 "strip_size_kb": 64, 00:16:58.887 "state": "online", 00:16:58.887 "raid_level": "raid5f", 00:16:58.887 "superblock": true, 00:16:58.887 "num_base_bdevs": 4, 00:16:58.887 "num_base_bdevs_discovered": 4, 00:16:58.887 "num_base_bdevs_operational": 4, 00:16:58.887 "base_bdevs_list": [ 00:16:58.887 { 00:16:58.887 "name": "spare", 00:16:58.887 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:16:58.887 "is_configured": true, 00:16:58.887 "data_offset": 2048, 00:16:58.887 "data_size": 63488 00:16:58.887 }, 00:16:58.887 { 00:16:58.887 "name": "BaseBdev2", 00:16:58.887 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:58.887 "is_configured": true, 00:16:58.887 "data_offset": 2048, 00:16:58.887 "data_size": 63488 00:16:58.887 }, 00:16:58.887 { 00:16:58.887 "name": "BaseBdev3", 00:16:58.887 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:58.887 "is_configured": true, 00:16:58.887 "data_offset": 2048, 00:16:58.887 "data_size": 63488 00:16:58.887 }, 00:16:58.887 { 00:16:58.887 "name": "BaseBdev4", 00:16:58.887 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:58.887 "is_configured": true, 00:16:58.887 "data_offset": 2048, 00:16:58.887 "data_size": 63488 00:16:58.887 } 00:16:58.887 ] 00:16:58.887 }' 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.887 "name": "raid_bdev1", 00:16:58.887 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:16:58.887 "strip_size_kb": 64, 00:16:58.887 "state": "online", 00:16:58.887 "raid_level": "raid5f", 00:16:58.887 "superblock": true, 00:16:58.887 "num_base_bdevs": 4, 00:16:58.887 "num_base_bdevs_discovered": 4, 00:16:58.887 "num_base_bdevs_operational": 4, 00:16:58.887 "base_bdevs_list": [ 00:16:58.887 { 00:16:58.887 "name": "spare", 
00:16:58.887 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:16:58.887 "is_configured": true, 00:16:58.887 "data_offset": 2048, 00:16:58.887 "data_size": 63488 00:16:58.887 }, 00:16:58.887 { 00:16:58.887 "name": "BaseBdev2", 00:16:58.887 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:16:58.887 "is_configured": true, 00:16:58.887 "data_offset": 2048, 00:16:58.887 "data_size": 63488 00:16:58.887 }, 00:16:58.887 { 00:16:58.887 "name": "BaseBdev3", 00:16:58.887 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:16:58.887 "is_configured": true, 00:16:58.887 "data_offset": 2048, 00:16:58.887 "data_size": 63488 00:16:58.887 }, 00:16:58.887 { 00:16:58.887 "name": "BaseBdev4", 00:16:58.887 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:16:58.887 "is_configured": true, 00:16:58.887 "data_offset": 2048, 00:16:58.887 "data_size": 63488 00:16:58.887 } 00:16:58.887 ] 00:16:58.887 }' 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.887 21:48:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.458 [2024-09-29 21:48:18.191992] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.458 [2024-09-29 21:48:18.192026] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.458 [2024-09-29 21:48:18.192102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.458 [2024-09-29 21:48:18.192194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.458 [2024-09-29 21:48:18.192217] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:59.458 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:59.718 /dev/nbd0 00:16:59.718 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:59.719 1+0 records in 00:16:59.719 1+0 records out 00:16:59.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350532 s, 11.7 MB/s 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:59.719 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:59.980 /dev/nbd1 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:59.980 1+0 records in 00:16:59.980 1+0 records out 00:16:59.980 4096 bytes 
(4.1 kB, 4.0 KiB) copied, 0.000329366 s, 12.4 MB/s 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:59.980 21:48:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:00.249 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:00.249 21:48:19 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:00.249 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:00.249 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.249 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.249 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:00.249 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:00.249 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.249 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:00.249 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:00.510 
21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.510 [2024-09-29 21:48:19.393787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:00.510 [2024-09-29 21:48:19.393855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.510 [2024-09-29 21:48:19.393876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:00.510 [2024-09-29 21:48:19.393885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.510 [2024-09-29 21:48:19.395992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.510 [2024-09-29 21:48:19.396040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:00.510 [2024-09-29 21:48:19.396131] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:00.510 [2024-09-29 21:48:19.396188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:00.510 [2024-09-29 21:48:19.396317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:00.510 [2024-09-29 21:48:19.396402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:00.510 [2024-09-29 21:48:19.396496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:17:00.510 spare 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.510 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.770 [2024-09-29 21:48:19.496390] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:00.770 [2024-09-29 21:48:19.496423] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:00.770 [2024-09-29 21:48:19.496693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:00.770 [2024-09-29 21:48:19.503854] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:00.770 [2024-09-29 21:48:19.503879] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:00.770 [2024-09-29 21:48:19.504089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.770 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.770 "name": "raid_bdev1", 00:17:00.770 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:17:00.770 "strip_size_kb": 64, 00:17:00.770 "state": "online", 00:17:00.770 "raid_level": "raid5f", 00:17:00.770 "superblock": true, 00:17:00.770 "num_base_bdevs": 4, 00:17:00.770 "num_base_bdevs_discovered": 4, 00:17:00.770 "num_base_bdevs_operational": 4, 00:17:00.770 "base_bdevs_list": [ 00:17:00.770 { 00:17:00.770 "name": "spare", 00:17:00.770 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:17:00.770 "is_configured": true, 00:17:00.770 "data_offset": 2048, 00:17:00.770 "data_size": 63488 00:17:00.770 }, 00:17:00.770 { 00:17:00.770 "name": "BaseBdev2", 00:17:00.771 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:17:00.771 "is_configured": true, 00:17:00.771 "data_offset": 2048, 00:17:00.771 "data_size": 63488 00:17:00.771 }, 00:17:00.771 { 00:17:00.771 "name": 
"BaseBdev3", 00:17:00.771 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:17:00.771 "is_configured": true, 00:17:00.771 "data_offset": 2048, 00:17:00.771 "data_size": 63488 00:17:00.771 }, 00:17:00.771 { 00:17:00.771 "name": "BaseBdev4", 00:17:00.771 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:17:00.771 "is_configured": true, 00:17:00.771 "data_offset": 2048, 00:17:00.771 "data_size": 63488 00:17:00.771 } 00:17:00.771 ] 00:17:00.771 }' 00:17:00.771 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.771 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.031 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.031 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.031 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.031 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.031 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.031 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.031 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.031 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.031 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.031 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.031 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.031 "name": "raid_bdev1", 00:17:01.031 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:17:01.031 
"strip_size_kb": 64, 00:17:01.031 "state": "online", 00:17:01.031 "raid_level": "raid5f", 00:17:01.031 "superblock": true, 00:17:01.031 "num_base_bdevs": 4, 00:17:01.031 "num_base_bdevs_discovered": 4, 00:17:01.031 "num_base_bdevs_operational": 4, 00:17:01.031 "base_bdevs_list": [ 00:17:01.031 { 00:17:01.031 "name": "spare", 00:17:01.031 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:17:01.031 "is_configured": true, 00:17:01.031 "data_offset": 2048, 00:17:01.031 "data_size": 63488 00:17:01.031 }, 00:17:01.031 { 00:17:01.031 "name": "BaseBdev2", 00:17:01.031 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:17:01.031 "is_configured": true, 00:17:01.031 "data_offset": 2048, 00:17:01.031 "data_size": 63488 00:17:01.031 }, 00:17:01.031 { 00:17:01.031 "name": "BaseBdev3", 00:17:01.031 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:17:01.031 "is_configured": true, 00:17:01.031 "data_offset": 2048, 00:17:01.031 "data_size": 63488 00:17:01.031 }, 00:17:01.031 { 00:17:01.031 "name": "BaseBdev4", 00:17:01.031 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:17:01.031 "is_configured": true, 00:17:01.031 "data_offset": 2048, 00:17:01.031 "data_size": 63488 00:17:01.031 } 00:17:01.031 ] 00:17:01.031 }' 00:17:01.031 21:48:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.291 [2024-09-29 21:48:20.115090] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.291 "name": "raid_bdev1", 00:17:01.291 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:17:01.291 "strip_size_kb": 64, 00:17:01.291 "state": "online", 00:17:01.291 "raid_level": "raid5f", 00:17:01.291 "superblock": true, 00:17:01.291 "num_base_bdevs": 4, 00:17:01.291 "num_base_bdevs_discovered": 3, 00:17:01.291 "num_base_bdevs_operational": 3, 00:17:01.291 "base_bdevs_list": [ 00:17:01.291 { 00:17:01.291 "name": null, 00:17:01.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.291 "is_configured": false, 00:17:01.291 "data_offset": 0, 00:17:01.291 "data_size": 63488 00:17:01.291 }, 00:17:01.291 { 00:17:01.291 "name": "BaseBdev2", 00:17:01.291 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:17:01.291 "is_configured": true, 00:17:01.291 "data_offset": 2048, 00:17:01.291 "data_size": 63488 00:17:01.291 }, 00:17:01.291 { 00:17:01.291 "name": "BaseBdev3", 00:17:01.291 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:17:01.291 "is_configured": true, 00:17:01.291 "data_offset": 2048, 00:17:01.291 "data_size": 63488 00:17:01.291 }, 00:17:01.291 { 00:17:01.291 "name": "BaseBdev4", 00:17:01.291 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:17:01.291 "is_configured": true, 00:17:01.291 "data_offset": 2048, 00:17:01.291 "data_size": 63488 00:17:01.291 } 00:17:01.291 ] 00:17:01.291 }' 
00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.291 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.862 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:01.862 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.862 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.862 [2024-09-29 21:48:20.554335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:01.862 [2024-09-29 21:48:20.554458] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:01.862 [2024-09-29 21:48:20.554479] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:01.862 [2024-09-29 21:48:20.554510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:01.862 [2024-09-29 21:48:20.567629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:01.862 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.862 21:48:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:01.862 [2024-09-29 21:48:20.576359] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:02.803 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.803 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.803 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.803 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.803 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.803 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.803 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.803 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.803 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.803 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.803 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.803 "name": "raid_bdev1", 00:17:02.803 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:17:02.803 "strip_size_kb": 64, 00:17:02.803 "state": "online", 00:17:02.803 "raid_level": "raid5f", 00:17:02.803 "superblock": true, 00:17:02.803 "num_base_bdevs": 4, 00:17:02.803 "num_base_bdevs_discovered": 4, 00:17:02.803 "num_base_bdevs_operational": 4, 00:17:02.803 "process": { 00:17:02.803 "type": "rebuild", 00:17:02.803 "target": "spare", 00:17:02.803 "progress": { 00:17:02.803 "blocks": 19200, 00:17:02.803 "percent": 10 00:17:02.803 } 00:17:02.803 }, 00:17:02.803 "base_bdevs_list": [ 00:17:02.803 { 00:17:02.803 "name": "spare", 00:17:02.803 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:17:02.803 "is_configured": true, 00:17:02.803 "data_offset": 2048, 00:17:02.803 "data_size": 63488 00:17:02.803 }, 00:17:02.803 { 00:17:02.803 "name": "BaseBdev2", 00:17:02.803 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:17:02.803 "is_configured": true, 00:17:02.803 "data_offset": 2048, 00:17:02.803 "data_size": 63488 00:17:02.803 }, 00:17:02.803 { 00:17:02.803 "name": "BaseBdev3", 00:17:02.803 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:17:02.803 
"is_configured": true, 00:17:02.803 "data_offset": 2048, 00:17:02.803 "data_size": 63488 00:17:02.803 }, 00:17:02.803 { 00:17:02.803 "name": "BaseBdev4", 00:17:02.803 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:17:02.803 "is_configured": true, 00:17:02.803 "data_offset": 2048, 00:17:02.804 "data_size": 63488 00:17:02.804 } 00:17:02.804 ] 00:17:02.804 }' 00:17:02.804 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.804 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.804 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.804 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.804 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:02.804 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.804 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.804 [2024-09-29 21:48:21.686951] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.804 [2024-09-29 21:48:21.781519] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:02.804 [2024-09-29 21:48:21.781577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.804 [2024-09-29 21:48:21.781591] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:02.804 [2024-09-29 21:48:21.781600] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:03.064 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.064 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 3 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.065 "name": "raid_bdev1", 00:17:03.065 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:17:03.065 "strip_size_kb": 64, 00:17:03.065 "state": "online", 00:17:03.065 "raid_level": "raid5f", 00:17:03.065 "superblock": true, 00:17:03.065 "num_base_bdevs": 4, 00:17:03.065 "num_base_bdevs_discovered": 3, 
00:17:03.065 "num_base_bdevs_operational": 3, 00:17:03.065 "base_bdevs_list": [ 00:17:03.065 { 00:17:03.065 "name": null, 00:17:03.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.065 "is_configured": false, 00:17:03.065 "data_offset": 0, 00:17:03.065 "data_size": 63488 00:17:03.065 }, 00:17:03.065 { 00:17:03.065 "name": "BaseBdev2", 00:17:03.065 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:17:03.065 "is_configured": true, 00:17:03.065 "data_offset": 2048, 00:17:03.065 "data_size": 63488 00:17:03.065 }, 00:17:03.065 { 00:17:03.065 "name": "BaseBdev3", 00:17:03.065 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:17:03.065 "is_configured": true, 00:17:03.065 "data_offset": 2048, 00:17:03.065 "data_size": 63488 00:17:03.065 }, 00:17:03.065 { 00:17:03.065 "name": "BaseBdev4", 00:17:03.065 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:17:03.065 "is_configured": true, 00:17:03.065 "data_offset": 2048, 00:17:03.065 "data_size": 63488 00:17:03.065 } 00:17:03.065 ] 00:17:03.065 }' 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.065 21:48:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.325 21:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:03.325 21:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.325 21:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.325 [2024-09-29 21:48:22.288156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:03.325 [2024-09-29 21:48:22.288230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.325 [2024-09-29 21:48:22.288255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:03.325 [2024-09-29 21:48:22.288268] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.325 [2024-09-29 21:48:22.288720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.325 [2024-09-29 21:48:22.288751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:03.325 [2024-09-29 21:48:22.288824] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:03.325 [2024-09-29 21:48:22.288843] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:03.325 [2024-09-29 21:48:22.288852] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:03.325 [2024-09-29 21:48:22.288877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.325 [2024-09-29 21:48:22.301649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:03.325 spare 00:17:03.325 21:48:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.325 21:48:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:03.586 [2024-09-29 21:48:22.309884] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.527 "name": "raid_bdev1", 00:17:04.527 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:17:04.527 "strip_size_kb": 64, 00:17:04.527 "state": "online", 00:17:04.527 "raid_level": "raid5f", 00:17:04.527 "superblock": true, 00:17:04.527 "num_base_bdevs": 4, 00:17:04.527 "num_base_bdevs_discovered": 4, 00:17:04.527 "num_base_bdevs_operational": 4, 00:17:04.527 "process": { 00:17:04.527 "type": "rebuild", 00:17:04.527 "target": "spare", 00:17:04.527 "progress": { 00:17:04.527 "blocks": 19200, 00:17:04.527 "percent": 10 00:17:04.527 } 00:17:04.527 }, 00:17:04.527 "base_bdevs_list": [ 00:17:04.527 { 00:17:04.527 "name": "spare", 00:17:04.527 "uuid": "6309bbf7-7e0e-5abc-9e11-ab5735cf1d9a", 00:17:04.527 "is_configured": true, 00:17:04.527 "data_offset": 2048, 00:17:04.527 "data_size": 63488 00:17:04.527 }, 00:17:04.527 { 00:17:04.527 "name": "BaseBdev2", 00:17:04.527 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:17:04.527 "is_configured": true, 00:17:04.527 "data_offset": 2048, 00:17:04.527 "data_size": 63488 00:17:04.527 }, 00:17:04.527 { 00:17:04.527 "name": "BaseBdev3", 00:17:04.527 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:17:04.527 "is_configured": true, 00:17:04.527 "data_offset": 2048, 00:17:04.527 "data_size": 63488 00:17:04.527 }, 00:17:04.527 { 00:17:04.527 "name": "BaseBdev4", 00:17:04.527 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 
00:17:04.527 "is_configured": true, 00:17:04.527 "data_offset": 2048, 00:17:04.527 "data_size": 63488 00:17:04.527 } 00:17:04.527 ] 00:17:04.527 }' 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.527 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.527 [2024-09-29 21:48:23.468478] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.788 [2024-09-29 21:48:23.515067] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:04.788 [2024-09-29 21:48:23.515130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.788 [2024-09-29 21:48:23.515147] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.788 [2024-09-29 21:48:23.515154] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.788 "name": "raid_bdev1", 00:17:04.788 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:17:04.788 "strip_size_kb": 64, 00:17:04.788 "state": "online", 00:17:04.788 "raid_level": "raid5f", 00:17:04.788 "superblock": true, 00:17:04.788 "num_base_bdevs": 4, 00:17:04.788 "num_base_bdevs_discovered": 3, 00:17:04.788 "num_base_bdevs_operational": 3, 00:17:04.788 "base_bdevs_list": [ 00:17:04.788 { 00:17:04.788 "name": null, 00:17:04.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.788 "is_configured": 
false, 00:17:04.788 "data_offset": 0, 00:17:04.788 "data_size": 63488 00:17:04.788 }, 00:17:04.788 { 00:17:04.788 "name": "BaseBdev2", 00:17:04.788 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:17:04.788 "is_configured": true, 00:17:04.788 "data_offset": 2048, 00:17:04.788 "data_size": 63488 00:17:04.788 }, 00:17:04.788 { 00:17:04.788 "name": "BaseBdev3", 00:17:04.788 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:17:04.788 "is_configured": true, 00:17:04.788 "data_offset": 2048, 00:17:04.788 "data_size": 63488 00:17:04.788 }, 00:17:04.788 { 00:17:04.788 "name": "BaseBdev4", 00:17:04.788 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:17:04.788 "is_configured": true, 00:17:04.788 "data_offset": 2048, 00:17:04.788 "data_size": 63488 00:17:04.788 } 00:17:04.788 ] 00:17:04.788 }' 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.788 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.048 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:05.048 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.048 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:05.048 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:05.048 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.048 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.048 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.048 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.048 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.048 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.048 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.048 "name": "raid_bdev1", 00:17:05.048 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:17:05.048 "strip_size_kb": 64, 00:17:05.048 "state": "online", 00:17:05.048 "raid_level": "raid5f", 00:17:05.048 "superblock": true, 00:17:05.048 "num_base_bdevs": 4, 00:17:05.048 "num_base_bdevs_discovered": 3, 00:17:05.048 "num_base_bdevs_operational": 3, 00:17:05.048 "base_bdevs_list": [ 00:17:05.048 { 00:17:05.048 "name": null, 00:17:05.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.048 "is_configured": false, 00:17:05.048 "data_offset": 0, 00:17:05.048 "data_size": 63488 00:17:05.048 }, 00:17:05.048 { 00:17:05.048 "name": "BaseBdev2", 00:17:05.048 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:17:05.048 "is_configured": true, 00:17:05.048 "data_offset": 2048, 00:17:05.048 "data_size": 63488 00:17:05.048 }, 00:17:05.048 { 00:17:05.048 "name": "BaseBdev3", 00:17:05.048 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:17:05.048 "is_configured": true, 00:17:05.048 "data_offset": 2048, 00:17:05.048 "data_size": 63488 00:17:05.048 }, 00:17:05.048 { 00:17:05.048 "name": "BaseBdev4", 00:17:05.048 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:17:05.048 "is_configured": true, 00:17:05.048 "data_offset": 2048, 00:17:05.048 "data_size": 63488 00:17:05.048 } 00:17:05.048 ] 00:17:05.048 }' 00:17:05.048 21:48:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.309 21:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:05.309 21:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.309 21:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # [[ none == \n\o\n\e ]] 00:17:05.309 21:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:05.309 21:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.309 21:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.309 21:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.309 21:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:05.309 21:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.309 21:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.309 [2024-09-29 21:48:24.116984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:05.309 [2024-09-29 21:48:24.117057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.309 [2024-09-29 21:48:24.117077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:05.309 [2024-09-29 21:48:24.117087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.309 [2024-09-29 21:48:24.117511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.309 [2024-09-29 21:48:24.117538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:05.309 [2024-09-29 21:48:24.117604] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:05.309 [2024-09-29 21:48:24.117615] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:05.309 [2024-09-29 21:48:24.117626] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain 
this bdev's uuid 00:17:05.309 [2024-09-29 21:48:24.117636] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:05.309 BaseBdev1 00:17:05.309 21:48:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.309 21:48:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.251 21:48:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.251 "name": "raid_bdev1", 00:17:06.251 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:17:06.251 "strip_size_kb": 64, 00:17:06.251 "state": "online", 00:17:06.251 "raid_level": "raid5f", 00:17:06.251 "superblock": true, 00:17:06.251 "num_base_bdevs": 4, 00:17:06.251 "num_base_bdevs_discovered": 3, 00:17:06.251 "num_base_bdevs_operational": 3, 00:17:06.251 "base_bdevs_list": [ 00:17:06.251 { 00:17:06.251 "name": null, 00:17:06.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.251 "is_configured": false, 00:17:06.251 "data_offset": 0, 00:17:06.251 "data_size": 63488 00:17:06.251 }, 00:17:06.251 { 00:17:06.251 "name": "BaseBdev2", 00:17:06.251 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:17:06.251 "is_configured": true, 00:17:06.251 "data_offset": 2048, 00:17:06.251 "data_size": 63488 00:17:06.251 }, 00:17:06.251 { 00:17:06.251 "name": "BaseBdev3", 00:17:06.251 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:17:06.251 "is_configured": true, 00:17:06.251 "data_offset": 2048, 00:17:06.251 "data_size": 63488 00:17:06.251 }, 00:17:06.251 { 00:17:06.251 "name": "BaseBdev4", 00:17:06.251 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:17:06.251 "is_configured": true, 00:17:06.251 "data_offset": 2048, 00:17:06.251 "data_size": 63488 00:17:06.251 } 00:17:06.251 ] 00:17:06.251 }' 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.251 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.821 21:48:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.821 "name": "raid_bdev1", 00:17:06.821 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:17:06.821 "strip_size_kb": 64, 00:17:06.821 "state": "online", 00:17:06.821 "raid_level": "raid5f", 00:17:06.821 "superblock": true, 00:17:06.821 "num_base_bdevs": 4, 00:17:06.821 "num_base_bdevs_discovered": 3, 00:17:06.821 "num_base_bdevs_operational": 3, 00:17:06.821 "base_bdevs_list": [ 00:17:06.821 { 00:17:06.821 "name": null, 00:17:06.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.821 "is_configured": false, 00:17:06.821 "data_offset": 0, 00:17:06.821 "data_size": 63488 00:17:06.821 }, 00:17:06.821 { 00:17:06.821 "name": "BaseBdev2", 00:17:06.821 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:17:06.821 "is_configured": true, 00:17:06.821 "data_offset": 2048, 00:17:06.821 "data_size": 63488 00:17:06.821 }, 00:17:06.821 { 00:17:06.821 "name": "BaseBdev3", 00:17:06.821 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:17:06.821 "is_configured": true, 00:17:06.821 "data_offset": 2048, 00:17:06.821 
"data_size": 63488 00:17:06.821 }, 00:17:06.821 { 00:17:06.821 "name": "BaseBdev4", 00:17:06.821 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:17:06.821 "is_configured": true, 00:17:06.821 "data_offset": 2048, 00:17:06.821 "data_size": 63488 00:17:06.821 } 00:17:06.821 ] 00:17:06.821 }' 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.821 [2024-09-29 
21:48:25.714269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.821 [2024-09-29 21:48:25.714406] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:06.821 [2024-09-29 21:48:25.714423] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:06.821 request: 00:17:06.821 { 00:17:06.821 "base_bdev": "BaseBdev1", 00:17:06.821 "raid_bdev": "raid_bdev1", 00:17:06.821 "method": "bdev_raid_add_base_bdev", 00:17:06.821 "req_id": 1 00:17:06.821 } 00:17:06.821 Got JSON-RPC error response 00:17:06.821 response: 00:17:06.821 { 00:17:06.821 "code": -22, 00:17:06.821 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:06.821 } 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:06.821 21:48:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.762 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.021 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.021 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.021 "name": "raid_bdev1", 00:17:08.021 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:17:08.021 "strip_size_kb": 64, 00:17:08.021 "state": "online", 00:17:08.021 "raid_level": "raid5f", 00:17:08.021 "superblock": true, 00:17:08.021 "num_base_bdevs": 4, 00:17:08.021 "num_base_bdevs_discovered": 3, 00:17:08.021 "num_base_bdevs_operational": 3, 00:17:08.021 "base_bdevs_list": [ 00:17:08.021 { 00:17:08.021 "name": null, 00:17:08.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.021 "is_configured": false, 00:17:08.021 "data_offset": 0, 00:17:08.021 "data_size": 63488 00:17:08.021 }, 00:17:08.021 { 00:17:08.021 "name": "BaseBdev2", 00:17:08.021 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:17:08.021 
"is_configured": true, 00:17:08.021 "data_offset": 2048, 00:17:08.021 "data_size": 63488 00:17:08.021 }, 00:17:08.021 { 00:17:08.021 "name": "BaseBdev3", 00:17:08.021 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:17:08.021 "is_configured": true, 00:17:08.021 "data_offset": 2048, 00:17:08.021 "data_size": 63488 00:17:08.021 }, 00:17:08.021 { 00:17:08.021 "name": "BaseBdev4", 00:17:08.021 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:17:08.021 "is_configured": true, 00:17:08.021 "data_offset": 2048, 00:17:08.021 "data_size": 63488 00:17:08.021 } 00:17:08.021 ] 00:17:08.021 }' 00:17:08.021 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.021 21:48:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:08.281 "name": "raid_bdev1", 00:17:08.281 "uuid": "87712cb6-0832-44f5-8820-cf9ca823a607", 00:17:08.281 "strip_size_kb": 64, 00:17:08.281 "state": "online", 00:17:08.281 "raid_level": "raid5f", 00:17:08.281 "superblock": true, 00:17:08.281 "num_base_bdevs": 4, 00:17:08.281 "num_base_bdevs_discovered": 3, 00:17:08.281 "num_base_bdevs_operational": 3, 00:17:08.281 "base_bdevs_list": [ 00:17:08.281 { 00:17:08.281 "name": null, 00:17:08.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.281 "is_configured": false, 00:17:08.281 "data_offset": 0, 00:17:08.281 "data_size": 63488 00:17:08.281 }, 00:17:08.281 { 00:17:08.281 "name": "BaseBdev2", 00:17:08.281 "uuid": "52417c0b-ec1d-5f7f-8408-93b3815e2565", 00:17:08.281 "is_configured": true, 00:17:08.281 "data_offset": 2048, 00:17:08.281 "data_size": 63488 00:17:08.281 }, 00:17:08.281 { 00:17:08.281 "name": "BaseBdev3", 00:17:08.281 "uuid": "00d3cdf8-c9d8-5871-98e8-e765b34481f1", 00:17:08.281 "is_configured": true, 00:17:08.281 "data_offset": 2048, 00:17:08.281 "data_size": 63488 00:17:08.281 }, 00:17:08.281 { 00:17:08.281 "name": "BaseBdev4", 00:17:08.281 "uuid": "72dfae66-403b-51be-b266-1f24f7a46821", 00:17:08.281 "is_configured": true, 00:17:08.281 "data_offset": 2048, 00:17:08.281 "data_size": 63488 00:17:08.281 } 00:17:08.281 ] 00:17:08.281 }' 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.281 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.541 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.541 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85164 00:17:08.541 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 
85164 ']' 00:17:08.541 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 85164 00:17:08.541 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:08.541 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:08.541 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85164 00:17:08.541 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:08.541 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:08.541 killing process with pid 85164 00:17:08.541 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85164' 00:17:08.541 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 85164 00:17:08.541 Received shutdown signal, test time was about 60.000000 seconds 00:17:08.541 00:17:08.541 Latency(us) 00:17:08.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.541 =================================================================================================================== 00:17:08.541 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:08.541 [2024-09-29 21:48:27.343347] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:08.541 [2024-09-29 21:48:27.343451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.541 21:48:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 85164 00:17:08.541 [2024-09-29 21:48:27.343521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.541 [2024-09-29 21:48:27.343533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
offline 00:17:09.110 [2024-09-29 21:48:27.795220] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:10.048 21:48:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:10.048 00:17:10.048 real 0m26.775s 00:17:10.048 user 0m33.442s 00:17:10.048 sys 0m3.074s 00:17:10.048 21:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:10.048 21:48:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.048 ************************************ 00:17:10.048 END TEST raid5f_rebuild_test_sb 00:17:10.048 ************************************ 00:17:10.048 21:48:29 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:10.048 21:48:29 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:10.048 21:48:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:10.048 21:48:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:10.048 21:48:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.048 ************************************ 00:17:10.048 START TEST raid_state_function_test_sb_4k 00:17:10.048 ************************************ 00:17:10.048 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:10.048 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:10.048 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:10.048 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:10.048 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:10.308 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:10.308 21:48:29 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:10.308 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@229 -- # raid_pid=85976 00:17:10.309 Process raid pid: 85976 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85976' 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85976 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 85976 ']' 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:10.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:10.309 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.309 [2024-09-29 21:48:29.131588] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:10.309 [2024-09-29 21:48:29.131697] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.568 [2024-09-29 21:48:29.296553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.568 [2024-09-29 21:48:29.487883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.828 [2024-09-29 21:48:29.686308] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.828 [2024-09-29 21:48:29.686345] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.088 [2024-09-29 21:48:29.941375] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:11.088 [2024-09-29 21:48:29.941426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:11.088 [2024-09-29 21:48:29.941436] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:11.088 [2024-09-29 21:48:29.941446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.088 "name": "Existed_Raid", 00:17:11.088 "uuid": 
"b19c9f53-cbf8-4e5d-82a3-90fc60bc87a2", 00:17:11.088 "strip_size_kb": 0, 00:17:11.088 "state": "configuring", 00:17:11.088 "raid_level": "raid1", 00:17:11.088 "superblock": true, 00:17:11.088 "num_base_bdevs": 2, 00:17:11.088 "num_base_bdevs_discovered": 0, 00:17:11.088 "num_base_bdevs_operational": 2, 00:17:11.088 "base_bdevs_list": [ 00:17:11.088 { 00:17:11.088 "name": "BaseBdev1", 00:17:11.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.088 "is_configured": false, 00:17:11.088 "data_offset": 0, 00:17:11.088 "data_size": 0 00:17:11.088 }, 00:17:11.088 { 00:17:11.088 "name": "BaseBdev2", 00:17:11.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.088 "is_configured": false, 00:17:11.088 "data_offset": 0, 00:17:11.088 "data_size": 0 00:17:11.088 } 00:17:11.088 ] 00:17:11.088 }' 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.088 21:48:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.656 [2024-09-29 21:48:30.364548] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:11.656 [2024-09-29 21:48:30.364584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:11.656 21:48:30 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.656 [2024-09-29 21:48:30.376557] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:11.656 [2024-09-29 21:48:30.376595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:11.656 [2024-09-29 21:48:30.376603] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:11.656 [2024-09-29 21:48:30.376614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.656 [2024-09-29 21:48:30.459490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.656 BaseBdev1 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.656 [ 00:17:11.656 { 00:17:11.656 "name": "BaseBdev1", 00:17:11.656 "aliases": [ 00:17:11.656 "d4617c95-845b-48b4-9c2e-95a608197c58" 00:17:11.656 ], 00:17:11.656 "product_name": "Malloc disk", 00:17:11.656 "block_size": 4096, 00:17:11.656 "num_blocks": 8192, 00:17:11.656 "uuid": "d4617c95-845b-48b4-9c2e-95a608197c58", 00:17:11.656 "assigned_rate_limits": { 00:17:11.656 "rw_ios_per_sec": 0, 00:17:11.656 "rw_mbytes_per_sec": 0, 00:17:11.656 "r_mbytes_per_sec": 0, 00:17:11.656 "w_mbytes_per_sec": 0 00:17:11.656 }, 00:17:11.656 "claimed": true, 00:17:11.656 "claim_type": "exclusive_write", 00:17:11.656 "zoned": false, 00:17:11.656 "supported_io_types": { 00:17:11.656 "read": true, 00:17:11.656 "write": true, 00:17:11.656 "unmap": true, 00:17:11.656 "flush": true, 00:17:11.656 "reset": true, 00:17:11.656 "nvme_admin": false, 00:17:11.656 "nvme_io": false, 00:17:11.656 "nvme_io_md": false, 00:17:11.656 "write_zeroes": true, 00:17:11.656 "zcopy": true, 00:17:11.656 
"get_zone_info": false, 00:17:11.656 "zone_management": false, 00:17:11.656 "zone_append": false, 00:17:11.656 "compare": false, 00:17:11.656 "compare_and_write": false, 00:17:11.656 "abort": true, 00:17:11.656 "seek_hole": false, 00:17:11.656 "seek_data": false, 00:17:11.656 "copy": true, 00:17:11.656 "nvme_iov_md": false 00:17:11.656 }, 00:17:11.656 "memory_domains": [ 00:17:11.656 { 00:17:11.656 "dma_device_id": "system", 00:17:11.656 "dma_device_type": 1 00:17:11.656 }, 00:17:11.656 { 00:17:11.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.656 "dma_device_type": 2 00:17:11.656 } 00:17:11.656 ], 00:17:11.656 "driver_specific": {} 00:17:11.656 } 00:17:11.656 ] 00:17:11.656 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.657 "name": "Existed_Raid", 00:17:11.657 "uuid": "4a4cf63a-f753-4012-a82e-c08ef5a2877d", 00:17:11.657 "strip_size_kb": 0, 00:17:11.657 "state": "configuring", 00:17:11.657 "raid_level": "raid1", 00:17:11.657 "superblock": true, 00:17:11.657 "num_base_bdevs": 2, 00:17:11.657 "num_base_bdevs_discovered": 1, 00:17:11.657 "num_base_bdevs_operational": 2, 00:17:11.657 "base_bdevs_list": [ 00:17:11.657 { 00:17:11.657 "name": "BaseBdev1", 00:17:11.657 "uuid": "d4617c95-845b-48b4-9c2e-95a608197c58", 00:17:11.657 "is_configured": true, 00:17:11.657 "data_offset": 256, 00:17:11.657 "data_size": 7936 00:17:11.657 }, 00:17:11.657 { 00:17:11.657 "name": "BaseBdev2", 00:17:11.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.657 "is_configured": false, 00:17:11.657 "data_offset": 0, 00:17:11.657 "data_size": 0 00:17:11.657 } 00:17:11.657 ] 00:17:11.657 }' 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.657 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.225 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:12.225 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.225 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.226 [2024-09-29 21:48:30.978618] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:12.226 [2024-09-29 21:48:30.978656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.226 [2024-09-29 21:48:30.990632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:12.226 [2024-09-29 21:48:30.992255] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:12.226 [2024-09-29 21:48:30.992297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:12.226 21:48:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.226 21:48:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.226 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.226 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.226 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.226 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.226 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.226 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.226 "name": "Existed_Raid", 00:17:12.226 "uuid": "870de1bc-b575-46e1-a53f-48857a23d885", 00:17:12.226 "strip_size_kb": 0, 00:17:12.226 "state": "configuring", 00:17:12.226 "raid_level": "raid1", 00:17:12.226 "superblock": true, 
00:17:12.226 "num_base_bdevs": 2, 00:17:12.226 "num_base_bdevs_discovered": 1, 00:17:12.226 "num_base_bdevs_operational": 2, 00:17:12.226 "base_bdevs_list": [ 00:17:12.226 { 00:17:12.226 "name": "BaseBdev1", 00:17:12.226 "uuid": "d4617c95-845b-48b4-9c2e-95a608197c58", 00:17:12.226 "is_configured": true, 00:17:12.226 "data_offset": 256, 00:17:12.226 "data_size": 7936 00:17:12.226 }, 00:17:12.226 { 00:17:12.226 "name": "BaseBdev2", 00:17:12.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.226 "is_configured": false, 00:17:12.226 "data_offset": 0, 00:17:12.226 "data_size": 0 00:17:12.226 } 00:17:12.226 ] 00:17:12.226 }' 00:17:12.226 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.226 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.486 [2024-09-29 21:48:31.435863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:12.486 [2024-09-29 21:48:31.436099] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:12.486 [2024-09-29 21:48:31.436115] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:12.486 [2024-09-29 21:48:31.436427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:12.486 [2024-09-29 21:48:31.436600] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:12.486 [2024-09-29 21:48:31.436620] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:17:12.486 BaseBdev2 00:17:12.486 [2024-09-29 21:48:31.436765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.486 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.486 [ 00:17:12.486 { 00:17:12.486 "name": "BaseBdev2", 00:17:12.486 "aliases": [ 00:17:12.486 "1abe37f9-2199-4d01-bec7-b05d2b8c9fad" 00:17:12.486 ], 00:17:12.486 "product_name": "Malloc 
disk", 00:17:12.486 "block_size": 4096, 00:17:12.486 "num_blocks": 8192, 00:17:12.486 "uuid": "1abe37f9-2199-4d01-bec7-b05d2b8c9fad", 00:17:12.486 "assigned_rate_limits": { 00:17:12.486 "rw_ios_per_sec": 0, 00:17:12.486 "rw_mbytes_per_sec": 0, 00:17:12.486 "r_mbytes_per_sec": 0, 00:17:12.486 "w_mbytes_per_sec": 0 00:17:12.486 }, 00:17:12.486 "claimed": true, 00:17:12.486 "claim_type": "exclusive_write", 00:17:12.486 "zoned": false, 00:17:12.486 "supported_io_types": { 00:17:12.486 "read": true, 00:17:12.486 "write": true, 00:17:12.486 "unmap": true, 00:17:12.486 "flush": true, 00:17:12.486 "reset": true, 00:17:12.486 "nvme_admin": false, 00:17:12.486 "nvme_io": false, 00:17:12.486 "nvme_io_md": false, 00:17:12.486 "write_zeroes": true, 00:17:12.486 "zcopy": true, 00:17:12.486 "get_zone_info": false, 00:17:12.486 "zone_management": false, 00:17:12.486 "zone_append": false, 00:17:12.486 "compare": false, 00:17:12.486 "compare_and_write": false, 00:17:12.486 "abort": true, 00:17:12.486 "seek_hole": false, 00:17:12.486 "seek_data": false, 00:17:12.486 "copy": true, 00:17:12.486 "nvme_iov_md": false 00:17:12.486 }, 00:17:12.486 "memory_domains": [ 00:17:12.486 { 00:17:12.486 "dma_device_id": "system", 00:17:12.486 "dma_device_type": 1 00:17:12.486 }, 00:17:12.486 { 00:17:12.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.746 "dma_device_type": 2 00:17:12.746 } 00:17:12.746 ], 00:17:12.746 "driver_specific": {} 00:17:12.746 } 00:17:12.746 ] 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.746 "name": "Existed_Raid", 00:17:12.746 "uuid": "870de1bc-b575-46e1-a53f-48857a23d885", 00:17:12.746 "strip_size_kb": 0, 00:17:12.746 "state": "online", 
00:17:12.746 "raid_level": "raid1", 00:17:12.746 "superblock": true, 00:17:12.746 "num_base_bdevs": 2, 00:17:12.746 "num_base_bdevs_discovered": 2, 00:17:12.746 "num_base_bdevs_operational": 2, 00:17:12.746 "base_bdevs_list": [ 00:17:12.746 { 00:17:12.746 "name": "BaseBdev1", 00:17:12.746 "uuid": "d4617c95-845b-48b4-9c2e-95a608197c58", 00:17:12.746 "is_configured": true, 00:17:12.746 "data_offset": 256, 00:17:12.746 "data_size": 7936 00:17:12.746 }, 00:17:12.746 { 00:17:12.746 "name": "BaseBdev2", 00:17:12.746 "uuid": "1abe37f9-2199-4d01-bec7-b05d2b8c9fad", 00:17:12.746 "is_configured": true, 00:17:12.746 "data_offset": 256, 00:17:12.746 "data_size": 7936 00:17:12.746 } 00:17:12.746 ] 00:17:12.746 }' 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.746 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.006 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:13.006 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:13.006 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:13.006 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:13.006 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:13.006 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:13.006 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:13.006 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:13.006 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:13.006 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.006 [2024-09-29 21:48:31.927289] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:13.006 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.006 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:13.006 "name": "Existed_Raid", 00:17:13.006 "aliases": [ 00:17:13.006 "870de1bc-b575-46e1-a53f-48857a23d885" 00:17:13.006 ], 00:17:13.006 "product_name": "Raid Volume", 00:17:13.006 "block_size": 4096, 00:17:13.006 "num_blocks": 7936, 00:17:13.006 "uuid": "870de1bc-b575-46e1-a53f-48857a23d885", 00:17:13.006 "assigned_rate_limits": { 00:17:13.006 "rw_ios_per_sec": 0, 00:17:13.006 "rw_mbytes_per_sec": 0, 00:17:13.006 "r_mbytes_per_sec": 0, 00:17:13.006 "w_mbytes_per_sec": 0 00:17:13.006 }, 00:17:13.006 "claimed": false, 00:17:13.006 "zoned": false, 00:17:13.006 "supported_io_types": { 00:17:13.006 "read": true, 00:17:13.006 "write": true, 00:17:13.006 "unmap": false, 00:17:13.006 "flush": false, 00:17:13.006 "reset": true, 00:17:13.006 "nvme_admin": false, 00:17:13.006 "nvme_io": false, 00:17:13.006 "nvme_io_md": false, 00:17:13.006 "write_zeroes": true, 00:17:13.006 "zcopy": false, 00:17:13.006 "get_zone_info": false, 00:17:13.007 "zone_management": false, 00:17:13.007 "zone_append": false, 00:17:13.007 "compare": false, 00:17:13.007 "compare_and_write": false, 00:17:13.007 "abort": false, 00:17:13.007 "seek_hole": false, 00:17:13.007 "seek_data": false, 00:17:13.007 "copy": false, 00:17:13.007 "nvme_iov_md": false 00:17:13.007 }, 00:17:13.007 "memory_domains": [ 00:17:13.007 { 00:17:13.007 "dma_device_id": "system", 00:17:13.007 "dma_device_type": 1 00:17:13.007 }, 00:17:13.007 { 00:17:13.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.007 "dma_device_type": 2 00:17:13.007 }, 00:17:13.007 { 00:17:13.007 
"dma_device_id": "system", 00:17:13.007 "dma_device_type": 1 00:17:13.007 }, 00:17:13.007 { 00:17:13.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.007 "dma_device_type": 2 00:17:13.007 } 00:17:13.007 ], 00:17:13.007 "driver_specific": { 00:17:13.007 "raid": { 00:17:13.007 "uuid": "870de1bc-b575-46e1-a53f-48857a23d885", 00:17:13.007 "strip_size_kb": 0, 00:17:13.007 "state": "online", 00:17:13.007 "raid_level": "raid1", 00:17:13.007 "superblock": true, 00:17:13.007 "num_base_bdevs": 2, 00:17:13.007 "num_base_bdevs_discovered": 2, 00:17:13.007 "num_base_bdevs_operational": 2, 00:17:13.007 "base_bdevs_list": [ 00:17:13.007 { 00:17:13.007 "name": "BaseBdev1", 00:17:13.007 "uuid": "d4617c95-845b-48b4-9c2e-95a608197c58", 00:17:13.007 "is_configured": true, 00:17:13.007 "data_offset": 256, 00:17:13.007 "data_size": 7936 00:17:13.007 }, 00:17:13.007 { 00:17:13.007 "name": "BaseBdev2", 00:17:13.007 "uuid": "1abe37f9-2199-4d01-bec7-b05d2b8c9fad", 00:17:13.007 "is_configured": true, 00:17:13.007 "data_offset": 256, 00:17:13.007 "data_size": 7936 00:17:13.007 } 00:17:13.007 ] 00:17:13.007 } 00:17:13.007 } 00:17:13.007 }' 00:17:13.007 21:48:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:13.266 BaseBdev2' 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.266 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.267 
21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.267 [2024-09-29 21:48:32.146709] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.267 21:48:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.267 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.526 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.526 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.526 "name": "Existed_Raid", 00:17:13.526 "uuid": "870de1bc-b575-46e1-a53f-48857a23d885", 00:17:13.526 "strip_size_kb": 0, 00:17:13.526 "state": "online", 00:17:13.526 "raid_level": "raid1", 00:17:13.526 "superblock": true, 00:17:13.526 "num_base_bdevs": 2, 00:17:13.526 "num_base_bdevs_discovered": 1, 00:17:13.526 "num_base_bdevs_operational": 1, 00:17:13.526 "base_bdevs_list": [ 00:17:13.526 { 00:17:13.526 "name": null, 00:17:13.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.526 "is_configured": false, 00:17:13.526 "data_offset": 0, 00:17:13.526 "data_size": 7936 00:17:13.526 }, 00:17:13.526 { 00:17:13.526 "name": "BaseBdev2", 00:17:13.526 "uuid": "1abe37f9-2199-4d01-bec7-b05d2b8c9fad", 00:17:13.526 "is_configured": true, 00:17:13.526 "data_offset": 256, 00:17:13.526 "data_size": 7936 00:17:13.526 } 00:17:13.526 ] 00:17:13.526 }' 00:17:13.526 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.526 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.786 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:13.786 21:48:32 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:13.786 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.786 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.786 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.786 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:13.786 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.046 [2024-09-29 21:48:32.782693] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:14.046 [2024-09-29 21:48:32.782797] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.046 [2024-09-29 21:48:32.871600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.046 [2024-09-29 21:48:32.871650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:14.046 [2024-09-29 21:48:32.871660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:14.046 21:48:32 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85976 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 85976 ']' 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 85976 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85976 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:14.046 killing process with pid 85976 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85976' 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 85976 00:17:14.046 [2024-09-29 21:48:32.969922] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:14.046 21:48:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 85976 00:17:14.046 [2024-09-29 21:48:32.985097] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:15.429 21:48:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:15.429 00:17:15.429 real 0m5.156s 00:17:15.429 user 0m7.355s 00:17:15.429 sys 0m0.919s 00:17:15.429 21:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:15.429 21:48:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.429 ************************************ 00:17:15.429 END TEST raid_state_function_test_sb_4k 00:17:15.429 ************************************ 00:17:15.429 21:48:34 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:15.429 21:48:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:15.429 21:48:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:15.429 21:48:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:15.429 ************************************ 00:17:15.429 START TEST raid_superblock_test_4k 00:17:15.429 ************************************ 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # 
raid_superblock_test raid1 2 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86223 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86223 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L 
bdev_raid 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 86223 ']' 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:15.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:15.430 21:48:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:15.430 [2024-09-29 21:48:34.370464] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:15.430 [2024-09-29 21:48:34.370588] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86223 ] 00:17:15.690 [2024-09-29 21:48:34.519432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.949 [2024-09-29 21:48:34.716189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.949 [2024-09-29 21:48:34.892926] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:15.949 [2024-09-29 21:48:34.892980] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:16.209 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:16.209 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:17:16.209 21:48:35 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:16.209 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:16.209 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:16.209 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:16.209 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:16.209 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:16.209 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:16.209 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:16.209 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:17:16.209 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.209 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.470 malloc1 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.470 [2024-09-29 21:48:35.206684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:16.470 [2024-09-29 21:48:35.206745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.470 [2024-09-29 21:48:35.206766] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:16.470 [2024-09-29 21:48:35.206776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.470 [2024-09-29 21:48:35.208689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.470 [2024-09-29 21:48:35.208726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:16.470 pt1 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.470 malloc2 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.470 21:48:35 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.470 [2024-09-29 21:48:35.269239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:16.470 [2024-09-29 21:48:35.269289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.470 [2024-09-29 21:48:35.269309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:16.470 [2024-09-29 21:48:35.269317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.470 [2024-09-29 21:48:35.271158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.470 [2024-09-29 21:48:35.271192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:16.470 pt2 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.470 [2024-09-29 21:48:35.281279] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:16.470 [2024-09-29 21:48:35.282893] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:17:16.470 [2024-09-29 21:48:35.283062] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:16.470 [2024-09-29 21:48:35.283076] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:16.470 [2024-09-29 21:48:35.283280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:16.470 [2024-09-29 21:48:35.283431] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:16.470 [2024-09-29 21:48:35.283449] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:16.470 [2024-09-29 21:48:35.283576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.470 21:48:35 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.470 "name": "raid_bdev1", 00:17:16.470 "uuid": "9b237c7b-10d3-4dba-8b7a-bc267c4fbb44", 00:17:16.470 "strip_size_kb": 0, 00:17:16.470 "state": "online", 00:17:16.470 "raid_level": "raid1", 00:17:16.470 "superblock": true, 00:17:16.470 "num_base_bdevs": 2, 00:17:16.470 "num_base_bdevs_discovered": 2, 00:17:16.470 "num_base_bdevs_operational": 2, 00:17:16.470 "base_bdevs_list": [ 00:17:16.470 { 00:17:16.470 "name": "pt1", 00:17:16.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.470 "is_configured": true, 00:17:16.470 "data_offset": 256, 00:17:16.470 "data_size": 7936 00:17:16.470 }, 00:17:16.470 { 00:17:16.470 "name": "pt2", 00:17:16.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.470 "is_configured": true, 00:17:16.470 "data_offset": 256, 00:17:16.470 "data_size": 7936 00:17:16.470 } 00:17:16.470 ] 00:17:16.470 }' 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.470 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:17.041 [2024-09-29 21:48:35.756666] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:17.041 "name": "raid_bdev1", 00:17:17.041 "aliases": [ 00:17:17.041 "9b237c7b-10d3-4dba-8b7a-bc267c4fbb44" 00:17:17.041 ], 00:17:17.041 "product_name": "Raid Volume", 00:17:17.041 "block_size": 4096, 00:17:17.041 "num_blocks": 7936, 00:17:17.041 "uuid": "9b237c7b-10d3-4dba-8b7a-bc267c4fbb44", 00:17:17.041 "assigned_rate_limits": { 00:17:17.041 "rw_ios_per_sec": 0, 00:17:17.041 "rw_mbytes_per_sec": 0, 00:17:17.041 "r_mbytes_per_sec": 0, 00:17:17.041 "w_mbytes_per_sec": 0 00:17:17.041 }, 00:17:17.041 "claimed": false, 00:17:17.041 "zoned": false, 00:17:17.041 "supported_io_types": { 00:17:17.041 "read": true, 00:17:17.041 "write": true, 00:17:17.041 "unmap": false, 00:17:17.041 "flush": false, 00:17:17.041 "reset": true, 00:17:17.041 
"nvme_admin": false, 00:17:17.041 "nvme_io": false, 00:17:17.041 "nvme_io_md": false, 00:17:17.041 "write_zeroes": true, 00:17:17.041 "zcopy": false, 00:17:17.041 "get_zone_info": false, 00:17:17.041 "zone_management": false, 00:17:17.041 "zone_append": false, 00:17:17.041 "compare": false, 00:17:17.041 "compare_and_write": false, 00:17:17.041 "abort": false, 00:17:17.041 "seek_hole": false, 00:17:17.041 "seek_data": false, 00:17:17.041 "copy": false, 00:17:17.041 "nvme_iov_md": false 00:17:17.041 }, 00:17:17.041 "memory_domains": [ 00:17:17.041 { 00:17:17.041 "dma_device_id": "system", 00:17:17.041 "dma_device_type": 1 00:17:17.041 }, 00:17:17.041 { 00:17:17.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.041 "dma_device_type": 2 00:17:17.041 }, 00:17:17.041 { 00:17:17.041 "dma_device_id": "system", 00:17:17.041 "dma_device_type": 1 00:17:17.041 }, 00:17:17.041 { 00:17:17.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.041 "dma_device_type": 2 00:17:17.041 } 00:17:17.041 ], 00:17:17.041 "driver_specific": { 00:17:17.041 "raid": { 00:17:17.041 "uuid": "9b237c7b-10d3-4dba-8b7a-bc267c4fbb44", 00:17:17.041 "strip_size_kb": 0, 00:17:17.041 "state": "online", 00:17:17.041 "raid_level": "raid1", 00:17:17.041 "superblock": true, 00:17:17.041 "num_base_bdevs": 2, 00:17:17.041 "num_base_bdevs_discovered": 2, 00:17:17.041 "num_base_bdevs_operational": 2, 00:17:17.041 "base_bdevs_list": [ 00:17:17.041 { 00:17:17.041 "name": "pt1", 00:17:17.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.041 "is_configured": true, 00:17:17.041 "data_offset": 256, 00:17:17.041 "data_size": 7936 00:17:17.041 }, 00:17:17.041 { 00:17:17.041 "name": "pt2", 00:17:17.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.041 "is_configured": true, 00:17:17.041 "data_offset": 256, 00:17:17.041 "data_size": 7936 00:17:17.041 } 00:17:17.041 ] 00:17:17.041 } 00:17:17.041 } 00:17:17.041 }' 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- 
# jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:17.041 pt2' 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:17.041 [2024-09-29 21:48:35.988278] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.041 21:48:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9b237c7b-10d3-4dba-8b7a-bc267c4fbb44 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 9b237c7b-10d3-4dba-8b7a-bc267c4fbb44 ']' 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.302 [2024-09-29 21:48:36.035960] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.302 [2024-09-29 21:48:36.035983] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.302 [2024-09-29 21:48:36.036109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:17:17.302 [2024-09-29 21:48:36.036159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.302 [2024-09-29 21:48:36.036179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:17.302 21:48:36 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd 
bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.302 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.302 [2024-09-29 21:48:36.175726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:17.302 [2024-09-29 21:48:36.177483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:17.302 [2024-09-29 21:48:36.177548] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:17.302 [2024-09-29 21:48:36.177590] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:17.302 [2024-09-29 21:48:36.177603] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.302 [2024-09-29 21:48:36.177612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:17.302 request: 00:17:17.302 { 00:17:17.302 "name": "raid_bdev1", 00:17:17.302 "raid_level": "raid1", 00:17:17.302 "base_bdevs": [ 00:17:17.302 "malloc1", 00:17:17.302 "malloc2" 00:17:17.302 ], 00:17:17.302 "superblock": false, 00:17:17.302 "method": "bdev_raid_create", 00:17:17.302 "req_id": 1 00:17:17.302 } 00:17:17.302 Got JSON-RPC error response 00:17:17.302 response: 00:17:17.302 { 00:17:17.302 "code": -17, 00:17:17.302 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:17.303 } 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:17.303 21:48:36 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.303 [2024-09-29 21:48:36.239595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:17.303 [2024-09-29 21:48:36.239641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.303 [2024-09-29 21:48:36.239654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:17.303 [2024-09-29 21:48:36.239663] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.303 [2024-09-29 21:48:36.241689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.303 [2024-09-29 21:48:36.241727] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt1 00:17:17.303 [2024-09-29 21:48:36.241786] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:17.303 [2024-09-29 21:48:36.241841] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:17.303 pt1 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:17.303 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.562 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.562 "name": "raid_bdev1", 00:17:17.562 "uuid": "9b237c7b-10d3-4dba-8b7a-bc267c4fbb44", 00:17:17.562 "strip_size_kb": 0, 00:17:17.562 "state": "configuring", 00:17:17.562 "raid_level": "raid1", 00:17:17.562 "superblock": true, 00:17:17.562 "num_base_bdevs": 2, 00:17:17.562 "num_base_bdevs_discovered": 1, 00:17:17.562 "num_base_bdevs_operational": 2, 00:17:17.562 "base_bdevs_list": [ 00:17:17.562 { 00:17:17.562 "name": "pt1", 00:17:17.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.562 "is_configured": true, 00:17:17.562 "data_offset": 256, 00:17:17.562 "data_size": 7936 00:17:17.562 }, 00:17:17.562 { 00:17:17.562 "name": null, 00:17:17.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.562 "is_configured": false, 00:17:17.562 "data_offset": 256, 00:17:17.562 "data_size": 7936 00:17:17.562 } 00:17:17.562 ] 00:17:17.562 }' 00:17:17.562 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.562 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.821 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:17.821 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:17.821 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:17.821 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:17.821 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.821 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.821 [2024-09-29 
21:48:36.710768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:17.821 [2024-09-29 21:48:36.710818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.821 [2024-09-29 21:48:36.710833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:17.821 [2024-09-29 21:48:36.710841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.821 [2024-09-29 21:48:36.711205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.821 [2024-09-29 21:48:36.711233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:17.821 [2024-09-29 21:48:36.711284] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:17.821 [2024-09-29 21:48:36.711302] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:17.821 [2024-09-29 21:48:36.711407] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:17.821 [2024-09-29 21:48:36.711424] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:17.821 [2024-09-29 21:48:36.711632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:17.822 [2024-09-29 21:48:36.711788] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:17.822 [2024-09-29 21:48:36.711805] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:17.822 [2024-09-29 21:48:36.711931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.822 pt2 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.822 "name": "raid_bdev1", 00:17:17.822 "uuid": "9b237c7b-10d3-4dba-8b7a-bc267c4fbb44", 00:17:17.822 "strip_size_kb": 0, 00:17:17.822 
"state": "online", 00:17:17.822 "raid_level": "raid1", 00:17:17.822 "superblock": true, 00:17:17.822 "num_base_bdevs": 2, 00:17:17.822 "num_base_bdevs_discovered": 2, 00:17:17.822 "num_base_bdevs_operational": 2, 00:17:17.822 "base_bdevs_list": [ 00:17:17.822 { 00:17:17.822 "name": "pt1", 00:17:17.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.822 "is_configured": true, 00:17:17.822 "data_offset": 256, 00:17:17.822 "data_size": 7936 00:17:17.822 }, 00:17:17.822 { 00:17:17.822 "name": "pt2", 00:17:17.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.822 "is_configured": true, 00:17:17.822 "data_offset": 256, 00:17:17.822 "data_size": 7936 00:17:17.822 } 00:17:17.822 ] 00:17:17.822 }' 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.822 21:48:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:18.392 [2024-09-29 21:48:37.154234] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:18.392 "name": "raid_bdev1", 00:17:18.392 "aliases": [ 00:17:18.392 "9b237c7b-10d3-4dba-8b7a-bc267c4fbb44" 00:17:18.392 ], 00:17:18.392 "product_name": "Raid Volume", 00:17:18.392 "block_size": 4096, 00:17:18.392 "num_blocks": 7936, 00:17:18.392 "uuid": "9b237c7b-10d3-4dba-8b7a-bc267c4fbb44", 00:17:18.392 "assigned_rate_limits": { 00:17:18.392 "rw_ios_per_sec": 0, 00:17:18.392 "rw_mbytes_per_sec": 0, 00:17:18.392 "r_mbytes_per_sec": 0, 00:17:18.392 "w_mbytes_per_sec": 0 00:17:18.392 }, 00:17:18.392 "claimed": false, 00:17:18.392 "zoned": false, 00:17:18.392 "supported_io_types": { 00:17:18.392 "read": true, 00:17:18.392 "write": true, 00:17:18.392 "unmap": false, 00:17:18.392 "flush": false, 00:17:18.392 "reset": true, 00:17:18.392 "nvme_admin": false, 00:17:18.392 "nvme_io": false, 00:17:18.392 "nvme_io_md": false, 00:17:18.392 "write_zeroes": true, 00:17:18.392 "zcopy": false, 00:17:18.392 "get_zone_info": false, 00:17:18.392 "zone_management": false, 00:17:18.392 "zone_append": false, 00:17:18.392 "compare": false, 00:17:18.392 "compare_and_write": false, 00:17:18.392 "abort": false, 00:17:18.392 "seek_hole": false, 00:17:18.392 "seek_data": false, 00:17:18.392 "copy": false, 00:17:18.392 "nvme_iov_md": false 00:17:18.392 }, 00:17:18.392 "memory_domains": [ 00:17:18.392 { 00:17:18.392 "dma_device_id": "system", 00:17:18.392 "dma_device_type": 1 00:17:18.392 }, 00:17:18.392 { 00:17:18.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.392 "dma_device_type": 2 00:17:18.392 }, 00:17:18.392 { 00:17:18.392 "dma_device_id": "system", 00:17:18.392 "dma_device_type": 1 00:17:18.392 }, 
00:17:18.392 { 00:17:18.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.392 "dma_device_type": 2 00:17:18.392 } 00:17:18.392 ], 00:17:18.392 "driver_specific": { 00:17:18.392 "raid": { 00:17:18.392 "uuid": "9b237c7b-10d3-4dba-8b7a-bc267c4fbb44", 00:17:18.392 "strip_size_kb": 0, 00:17:18.392 "state": "online", 00:17:18.392 "raid_level": "raid1", 00:17:18.392 "superblock": true, 00:17:18.392 "num_base_bdevs": 2, 00:17:18.392 "num_base_bdevs_discovered": 2, 00:17:18.392 "num_base_bdevs_operational": 2, 00:17:18.392 "base_bdevs_list": [ 00:17:18.392 { 00:17:18.392 "name": "pt1", 00:17:18.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:18.392 "is_configured": true, 00:17:18.392 "data_offset": 256, 00:17:18.392 "data_size": 7936 00:17:18.392 }, 00:17:18.392 { 00:17:18.392 "name": "pt2", 00:17:18.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.392 "is_configured": true, 00:17:18.392 "data_offset": 256, 00:17:18.392 "data_size": 7936 00:17:18.392 } 00:17:18.392 ] 00:17:18.392 } 00:17:18.392 } 00:17:18.392 }' 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:18.392 pt2' 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:18.392 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:18.652 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:18.652 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:18.652 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.652 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.652 [2024-09-29 21:48:37.385799] bdev_raid.c:1129:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:17:18.652 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.652 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 9b237c7b-10d3-4dba-8b7a-bc267c4fbb44 '!=' 9b237c7b-10d3-4dba-8b7a-bc267c4fbb44 ']' 00:17:18.652 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:18.652 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.653 [2024-09-29 21:48:37.433546] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.653 21:48:37 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.653 "name": "raid_bdev1", 00:17:18.653 "uuid": "9b237c7b-10d3-4dba-8b7a-bc267c4fbb44", 00:17:18.653 "strip_size_kb": 0, 00:17:18.653 "state": "online", 00:17:18.653 "raid_level": "raid1", 00:17:18.653 "superblock": true, 00:17:18.653 "num_base_bdevs": 2, 00:17:18.653 "num_base_bdevs_discovered": 1, 00:17:18.653 "num_base_bdevs_operational": 1, 00:17:18.653 "base_bdevs_list": [ 00:17:18.653 { 00:17:18.653 "name": null, 00:17:18.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.653 "is_configured": false, 00:17:18.653 "data_offset": 0, 00:17:18.653 "data_size": 7936 00:17:18.653 }, 00:17:18.653 { 00:17:18.653 "name": "pt2", 00:17:18.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.653 "is_configured": true, 00:17:18.653 "data_offset": 256, 00:17:18.653 "data_size": 7936 00:17:18.653 } 00:17:18.653 ] 00:17:18.653 }' 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.653 21:48:37 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.225 [2024-09-29 21:48:37.920759] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.225 [2024-09-29 21:48:37.920786] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.225 [2024-09-29 21:48:37.920837] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.225 [2024-09-29 21:48:37.920874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.225 [2024-09-29 21:48:37.920889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:19.225 
21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.225 [2024-09-29 21:48:37.992652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:19.225 [2024-09-29 21:48:37.992704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.225 [2024-09-29 21:48:37.992717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:19.225 [2024-09-29 21:48:37.992727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.225 [2024-09-29 21:48:37.994740] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:17:19.225 [2024-09-29 21:48:37.994777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:19.225 [2024-09-29 21:48:37.994841] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:19.225 [2024-09-29 21:48:37.994882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.225 [2024-09-29 21:48:37.994978] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:19.225 [2024-09-29 21:48:37.994993] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:19.225 [2024-09-29 21:48:37.995223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:19.225 [2024-09-29 21:48:37.995370] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:19.225 [2024-09-29 21:48:37.995385] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:19.225 [2024-09-29 21:48:37.995517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.225 pt2 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.225 21:48:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.225 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.225 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.225 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.225 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.225 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.225 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.225 "name": "raid_bdev1", 00:17:19.225 "uuid": "9b237c7b-10d3-4dba-8b7a-bc267c4fbb44", 00:17:19.225 "strip_size_kb": 0, 00:17:19.225 "state": "online", 00:17:19.225 "raid_level": "raid1", 00:17:19.225 "superblock": true, 00:17:19.225 "num_base_bdevs": 2, 00:17:19.225 "num_base_bdevs_discovered": 1, 00:17:19.225 "num_base_bdevs_operational": 1, 00:17:19.225 "base_bdevs_list": [ 00:17:19.225 { 00:17:19.225 "name": null, 00:17:19.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.225 "is_configured": false, 00:17:19.225 "data_offset": 256, 00:17:19.225 "data_size": 7936 00:17:19.225 }, 00:17:19.225 { 00:17:19.225 "name": "pt2", 00:17:19.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.225 "is_configured": true, 00:17:19.225 "data_offset": 256, 00:17:19.225 "data_size": 7936 00:17:19.225 } 00:17:19.225 ] 00:17:19.225 }' 00:17:19.225 
21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.225 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.485 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.485 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.485 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.745 [2024-09-29 21:48:38.471893] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.745 [2024-09-29 21:48:38.471921] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.745 [2024-09-29 21:48:38.471979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.745 [2024-09-29 21:48:38.472014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.745 [2024-09-29 21:48:38.472022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:19.745 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.745 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.745 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:19.745 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.745 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.745 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.745 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:19.745 21:48:38 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:19.745 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:19.745 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:19.745 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.745 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.745 [2024-09-29 21:48:38.531810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:19.745 [2024-09-29 21:48:38.531853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.745 [2024-09-29 21:48:38.531868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:19.746 [2024-09-29 21:48:38.531877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.746 [2024-09-29 21:48:38.533870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.746 [2024-09-29 21:48:38.533908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:19.746 [2024-09-29 21:48:38.533981] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:19.746 [2024-09-29 21:48:38.534028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.746 [2024-09-29 21:48:38.534139] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:19.746 [2024-09-29 21:48:38.534149] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.746 [2024-09-29 21:48:38.534164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:19.746 [2024-09-29 21:48:38.534236] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.746 [2024-09-29 21:48:38.534322] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:19.746 [2024-09-29 21:48:38.534329] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:19.746 [2024-09-29 21:48:38.534534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:19.746 [2024-09-29 21:48:38.534670] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:19.746 [2024-09-29 21:48:38.534689] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:19.746 [2024-09-29 21:48:38.534816] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.746 pt1 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.746 "name": "raid_bdev1", 00:17:19.746 "uuid": "9b237c7b-10d3-4dba-8b7a-bc267c4fbb44", 00:17:19.746 "strip_size_kb": 0, 00:17:19.746 "state": "online", 00:17:19.746 "raid_level": "raid1", 00:17:19.746 "superblock": true, 00:17:19.746 "num_base_bdevs": 2, 00:17:19.746 "num_base_bdevs_discovered": 1, 00:17:19.746 "num_base_bdevs_operational": 1, 00:17:19.746 "base_bdevs_list": [ 00:17:19.746 { 00:17:19.746 "name": null, 00:17:19.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.746 "is_configured": false, 00:17:19.746 "data_offset": 256, 00:17:19.746 "data_size": 7936 00:17:19.746 }, 00:17:19.746 { 00:17:19.746 "name": "pt2", 00:17:19.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.746 "is_configured": true, 00:17:19.746 "data_offset": 256, 00:17:19.746 "data_size": 7936 00:17:19.746 } 00:17:19.746 ] 00:17:19.746 }' 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.746 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.006 21:48:38 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:20.006 21:48:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:20.006 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.006 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.266 21:48:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.266 [2024-09-29 21:48:39.031135] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 9b237c7b-10d3-4dba-8b7a-bc267c4fbb44 '!=' 9b237c7b-10d3-4dba-8b7a-bc267c4fbb44 ']' 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86223 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 86223 ']' 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 86223 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86223 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86223' 00:17:20.266 killing process with pid 86223 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 86223 00:17:20.266 [2024-09-29 21:48:39.104826] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:20.266 [2024-09-29 21:48:39.104902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.266 [2024-09-29 21:48:39.104941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.266 [2024-09-29 21:48:39.104959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:20.266 21:48:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 86223 00:17:20.526 [2024-09-29 21:48:39.298336] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.909 21:48:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:21.909 00:17:21.909 real 0m6.206s 00:17:21.909 user 0m9.358s 00:17:21.909 sys 0m1.155s 00:17:21.909 21:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:21.909 21:48:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 ************************************ 00:17:21.909 END TEST raid_superblock_test_4k 00:17:21.909 ************************************ 00:17:21.909 21:48:40 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:17:21.909 21:48:40 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:21.909 21:48:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:21.909 21:48:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:21.909 21:48:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:21.909 ************************************ 00:17:21.909 START TEST raid_rebuild_test_sb_4k 00:17:21.909 ************************************ 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:21.909 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:21.910 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:21.910 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:21.910 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86551 00:17:21.910 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86551 00:17:21.910 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:21.910 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86551 ']' 00:17:21.910 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.910 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:17:21.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.910 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.910 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:21.910 21:48:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.910 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:21.910 Zero copy mechanism will not be used. 00:17:21.910 [2024-09-29 21:48:40.654052] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:21.910 [2024-09-29 21:48:40.654178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86551 ] 00:17:21.910 [2024-09-29 21:48:40.815500] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.170 [2024-09-29 21:48:41.004121] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.429 [2024-09-29 21:48:41.194988] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.429 [2024-09-29 21:48:41.195060] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:22.690 
21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.690 BaseBdev1_malloc 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.690 [2024-09-29 21:48:41.527431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:22.690 [2024-09-29 21:48:41.527495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.690 [2024-09-29 21:48:41.527518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:22.690 [2024-09-29 21:48:41.527531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.690 [2024-09-29 21:48:41.529513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.690 [2024-09-29 21:48:41.529553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:22.690 BaseBdev1 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:22.690 BaseBdev2_malloc 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.690 [2024-09-29 21:48:41.612469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:22.690 [2024-09-29 21:48:41.612539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.690 [2024-09-29 21:48:41.612558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:22.690 [2024-09-29 21:48:41.612569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.690 [2024-09-29 21:48:41.614490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.690 [2024-09-29 21:48:41.614527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:22.690 BaseBdev2 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.690 spare_malloc 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.690 spare_delay 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.690 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.950 [2024-09-29 21:48:41.677591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:22.950 [2024-09-29 21:48:41.677645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.950 [2024-09-29 21:48:41.677661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:22.950 [2024-09-29 21:48:41.677671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.950 [2024-09-29 21:48:41.679545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.950 [2024-09-29 21:48:41.679584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:22.950 spare 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.950 
[2024-09-29 21:48:41.689614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.950 [2024-09-29 21:48:41.691233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:22.950 [2024-09-29 21:48:41.691394] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:22.950 [2024-09-29 21:48:41.691408] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:22.950 [2024-09-29 21:48:41.691632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:22.950 [2024-09-29 21:48:41.691790] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:22.950 [2024-09-29 21:48:41.691805] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:22.950 [2024-09-29 21:48:41.691943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.950 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.951 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.951 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.951 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.951 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.951 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.951 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.951 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.951 "name": "raid_bdev1", 00:17:22.951 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:22.951 "strip_size_kb": 0, 00:17:22.951 "state": "online", 00:17:22.951 "raid_level": "raid1", 00:17:22.951 "superblock": true, 00:17:22.951 "num_base_bdevs": 2, 00:17:22.951 "num_base_bdevs_discovered": 2, 00:17:22.951 "num_base_bdevs_operational": 2, 00:17:22.951 "base_bdevs_list": [ 00:17:22.951 { 00:17:22.951 "name": "BaseBdev1", 00:17:22.951 "uuid": "5e2f205b-645e-5e55-97f2-5d98ffff8a23", 00:17:22.951 "is_configured": true, 00:17:22.951 "data_offset": 256, 00:17:22.951 "data_size": 7936 00:17:22.951 }, 00:17:22.951 { 00:17:22.951 "name": "BaseBdev2", 00:17:22.951 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:22.951 "is_configured": true, 00:17:22.951 "data_offset": 256, 00:17:22.951 "data_size": 7936 00:17:22.951 } 00:17:22.951 ] 00:17:22.951 }' 00:17:22.951 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.951 21:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:23.211 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:23.211 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:23.211 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.211 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.211 [2024-09-29 21:48:42.145014] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:23.211 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.211 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:23.211 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.211 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.211 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.211 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:23.211 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.471 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:23.471 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:23.471 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:23.471 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:23.471 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:23.472 [2024-09-29 21:48:42.392379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:23.472 /dev/nbd0 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:23.472 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.732 1+0 records in 00:17:23.732 1+0 records out 00:17:23.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410471 s, 10.0 MB/s 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:23.732 21:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:24.305 7936+0 records in 00:17:24.305 7936+0 records out 00:17:24.305 32505856 bytes (33 MB, 31 MiB) copied, 0.604363 s, 53.8 MB/s 00:17:24.305 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:24.305 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:24.305 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:24.305 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:24.305 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:24.305 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:24.305 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:24.305 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:24.305 [2024-09-29 21:48:43.282526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.305 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:24.305 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:24.305 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:24.305 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:24.305 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.585 [2024-09-29 21:48:43.298570] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.585 "name": 
"raid_bdev1", 00:17:24.585 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:24.585 "strip_size_kb": 0, 00:17:24.585 "state": "online", 00:17:24.585 "raid_level": "raid1", 00:17:24.585 "superblock": true, 00:17:24.585 "num_base_bdevs": 2, 00:17:24.585 "num_base_bdevs_discovered": 1, 00:17:24.585 "num_base_bdevs_operational": 1, 00:17:24.585 "base_bdevs_list": [ 00:17:24.585 { 00:17:24.585 "name": null, 00:17:24.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.585 "is_configured": false, 00:17:24.585 "data_offset": 0, 00:17:24.585 "data_size": 7936 00:17:24.585 }, 00:17:24.585 { 00:17:24.585 "name": "BaseBdev2", 00:17:24.585 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:24.585 "is_configured": true, 00:17:24.585 "data_offset": 256, 00:17:24.585 "data_size": 7936 00:17:24.585 } 00:17:24.585 ] 00:17:24.585 }' 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.585 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.886 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:24.886 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.886 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.886 [2024-09-29 21:48:43.713871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:24.886 [2024-09-29 21:48:43.726859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:24.886 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.886 21:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:24.886 [2024-09-29 21:48:43.728545] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:25.840 21:48:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.840 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.840 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.840 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.840 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.840 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.840 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.840 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.840 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.840 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.840 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.840 "name": "raid_bdev1", 00:17:25.840 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:25.840 "strip_size_kb": 0, 00:17:25.840 "state": "online", 00:17:25.840 "raid_level": "raid1", 00:17:25.840 "superblock": true, 00:17:25.840 "num_base_bdevs": 2, 00:17:25.840 "num_base_bdevs_discovered": 2, 00:17:25.840 "num_base_bdevs_operational": 2, 00:17:25.840 "process": { 00:17:25.840 "type": "rebuild", 00:17:25.840 "target": "spare", 00:17:25.840 "progress": { 00:17:25.840 "blocks": 2560, 00:17:25.840 "percent": 32 00:17:25.840 } 00:17:25.840 }, 00:17:25.840 "base_bdevs_list": [ 00:17:25.840 { 00:17:25.840 "name": "spare", 00:17:25.840 "uuid": "35ac012b-e4d1-5e76-8d7c-aa4ab34b5ba0", 00:17:25.840 "is_configured": true, 00:17:25.840 "data_offset": 256, 
00:17:25.840 "data_size": 7936 00:17:25.840 }, 00:17:25.840 { 00:17:25.840 "name": "BaseBdev2", 00:17:25.840 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:25.840 "is_configured": true, 00:17:25.840 "data_offset": 256, 00:17:25.840 "data_size": 7936 00:17:25.840 } 00:17:25.840 ] 00:17:25.840 }' 00:17:25.840 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.100 [2024-09-29 21:48:44.864590] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.100 [2024-09-29 21:48:44.932961] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:26.100 [2024-09-29 21:48:44.933020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.100 [2024-09-29 21:48:44.933042] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.100 [2024-09-29 21:48:44.933053] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.100 
21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.100 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.101 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.101 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.101 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.101 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.101 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.101 21:48:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.101 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.101 "name": "raid_bdev1", 00:17:26.101 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:26.101 "strip_size_kb": 0, 00:17:26.101 "state": "online", 00:17:26.101 "raid_level": "raid1", 00:17:26.101 "superblock": true, 00:17:26.101 "num_base_bdevs": 2, 00:17:26.101 "num_base_bdevs_discovered": 1, 00:17:26.101 
"num_base_bdevs_operational": 1, 00:17:26.101 "base_bdevs_list": [ 00:17:26.101 { 00:17:26.101 "name": null, 00:17:26.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.101 "is_configured": false, 00:17:26.101 "data_offset": 0, 00:17:26.101 "data_size": 7936 00:17:26.101 }, 00:17:26.101 { 00:17:26.101 "name": "BaseBdev2", 00:17:26.101 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:26.101 "is_configured": true, 00:17:26.101 "data_offset": 256, 00:17:26.101 "data_size": 7936 00:17:26.101 } 00:17:26.101 ] 00:17:26.101 }' 00:17:26.101 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.101 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.671 
"name": "raid_bdev1", 00:17:26.671 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:26.671 "strip_size_kb": 0, 00:17:26.671 "state": "online", 00:17:26.671 "raid_level": "raid1", 00:17:26.671 "superblock": true, 00:17:26.671 "num_base_bdevs": 2, 00:17:26.671 "num_base_bdevs_discovered": 1, 00:17:26.671 "num_base_bdevs_operational": 1, 00:17:26.671 "base_bdevs_list": [ 00:17:26.671 { 00:17:26.671 "name": null, 00:17:26.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.671 "is_configured": false, 00:17:26.671 "data_offset": 0, 00:17:26.671 "data_size": 7936 00:17:26.671 }, 00:17:26.671 { 00:17:26.671 "name": "BaseBdev2", 00:17:26.671 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:26.671 "is_configured": true, 00:17:26.671 "data_offset": 256, 00:17:26.671 "data_size": 7936 00:17:26.671 } 00:17:26.671 ] 00:17:26.671 }' 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.671 [2024-09-29 21:48:45.523439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.671 [2024-09-29 21:48:45.537491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:26.671 21:48:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:26.671 [2024-09-29 21:48:45.539140] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.610 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.611 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.611 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.611 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.611 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.611 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.611 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.611 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.611 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.611 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.871 "name": "raid_bdev1", 00:17:27.871 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:27.871 "strip_size_kb": 0, 00:17:27.871 "state": "online", 00:17:27.871 "raid_level": "raid1", 00:17:27.871 "superblock": true, 00:17:27.871 "num_base_bdevs": 2, 00:17:27.871 "num_base_bdevs_discovered": 2, 00:17:27.871 "num_base_bdevs_operational": 2, 00:17:27.871 "process": { 00:17:27.871 "type": "rebuild", 00:17:27.871 "target": "spare", 00:17:27.871 "progress": { 00:17:27.871 "blocks": 2560, 00:17:27.871 
"percent": 32 00:17:27.871 } 00:17:27.871 }, 00:17:27.871 "base_bdevs_list": [ 00:17:27.871 { 00:17:27.871 "name": "spare", 00:17:27.871 "uuid": "35ac012b-e4d1-5e76-8d7c-aa4ab34b5ba0", 00:17:27.871 "is_configured": true, 00:17:27.871 "data_offset": 256, 00:17:27.871 "data_size": 7936 00:17:27.871 }, 00:17:27.871 { 00:17:27.871 "name": "BaseBdev2", 00:17:27.871 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:27.871 "is_configured": true, 00:17:27.871 "data_offset": 256, 00:17:27.871 "data_size": 7936 00:17:27.871 } 00:17:27.871 ] 00:17:27.871 }' 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:27.871 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=682 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.871 "name": "raid_bdev1", 00:17:27.871 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:27.871 "strip_size_kb": 0, 00:17:27.871 "state": "online", 00:17:27.871 "raid_level": "raid1", 00:17:27.871 "superblock": true, 00:17:27.871 "num_base_bdevs": 2, 00:17:27.871 "num_base_bdevs_discovered": 2, 00:17:27.871 "num_base_bdevs_operational": 2, 00:17:27.871 "process": { 00:17:27.871 "type": "rebuild", 00:17:27.871 "target": "spare", 00:17:27.871 "progress": { 00:17:27.871 "blocks": 2816, 00:17:27.871 "percent": 35 00:17:27.871 } 00:17:27.871 }, 00:17:27.871 "base_bdevs_list": [ 00:17:27.871 { 00:17:27.871 "name": "spare", 00:17:27.871 "uuid": "35ac012b-e4d1-5e76-8d7c-aa4ab34b5ba0", 00:17:27.871 "is_configured": true, 00:17:27.871 "data_offset": 256, 00:17:27.871 "data_size": 7936 00:17:27.871 }, 00:17:27.871 { 00:17:27.871 "name": "BaseBdev2", 
00:17:27.871 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:27.871 "is_configured": true, 00:17:27.871 "data_offset": 256, 00:17:27.871 "data_size": 7936 00:17:27.871 } 00:17:27.871 ] 00:17:27.871 }' 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.871 21:48:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.812 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.812 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.812 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.812 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.812 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.812 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.812 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.812 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.812 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.812 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.072 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.072 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.072 "name": "raid_bdev1", 00:17:29.072 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:29.072 "strip_size_kb": 0, 00:17:29.072 "state": "online", 00:17:29.072 "raid_level": "raid1", 00:17:29.072 "superblock": true, 00:17:29.072 "num_base_bdevs": 2, 00:17:29.072 "num_base_bdevs_discovered": 2, 00:17:29.072 "num_base_bdevs_operational": 2, 00:17:29.072 "process": { 00:17:29.072 "type": "rebuild", 00:17:29.072 "target": "spare", 00:17:29.072 "progress": { 00:17:29.072 "blocks": 5632, 00:17:29.072 "percent": 70 00:17:29.072 } 00:17:29.072 }, 00:17:29.072 "base_bdevs_list": [ 00:17:29.072 { 00:17:29.072 "name": "spare", 00:17:29.072 "uuid": "35ac012b-e4d1-5e76-8d7c-aa4ab34b5ba0", 00:17:29.072 "is_configured": true, 00:17:29.072 "data_offset": 256, 00:17:29.072 "data_size": 7936 00:17:29.072 }, 00:17:29.072 { 00:17:29.072 "name": "BaseBdev2", 00:17:29.072 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:29.072 "is_configured": true, 00:17:29.072 "data_offset": 256, 00:17:29.072 "data_size": 7936 00:17:29.072 } 00:17:29.072 ] 00:17:29.072 }' 00:17:29.072 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.072 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:29.072 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.072 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:29.072 21:48:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.012 [2024-09-29 21:48:48.650193] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:30.012 [2024-09-29 21:48:48.650272] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:30.012 [2024-09-29 21:48:48.650363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.012 21:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.012 21:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.012 21:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.012 21:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.012 21:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.012 21:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.012 21:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.012 21:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.012 21:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.012 21:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.012 21:48:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.012 21:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.012 "name": "raid_bdev1", 00:17:30.012 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:30.012 "strip_size_kb": 0, 00:17:30.012 "state": "online", 00:17:30.012 "raid_level": "raid1", 00:17:30.012 "superblock": true, 00:17:30.012 "num_base_bdevs": 2, 00:17:30.012 "num_base_bdevs_discovered": 2, 00:17:30.012 "num_base_bdevs_operational": 2, 00:17:30.012 "base_bdevs_list": [ 00:17:30.012 { 00:17:30.012 "name": 
"spare", 00:17:30.013 "uuid": "35ac012b-e4d1-5e76-8d7c-aa4ab34b5ba0", 00:17:30.013 "is_configured": true, 00:17:30.013 "data_offset": 256, 00:17:30.013 "data_size": 7936 00:17:30.013 }, 00:17:30.013 { 00:17:30.013 "name": "BaseBdev2", 00:17:30.013 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:30.013 "is_configured": true, 00:17:30.013 "data_offset": 256, 00:17:30.013 "data_size": 7936 00:17:30.013 } 00:17:30.013 ] 00:17:30.013 }' 00:17:30.013 21:48:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.276 "name": "raid_bdev1", 00:17:30.276 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:30.276 "strip_size_kb": 0, 00:17:30.276 "state": "online", 00:17:30.276 "raid_level": "raid1", 00:17:30.276 "superblock": true, 00:17:30.276 "num_base_bdevs": 2, 00:17:30.276 "num_base_bdevs_discovered": 2, 00:17:30.276 "num_base_bdevs_operational": 2, 00:17:30.276 "base_bdevs_list": [ 00:17:30.276 { 00:17:30.276 "name": "spare", 00:17:30.276 "uuid": "35ac012b-e4d1-5e76-8d7c-aa4ab34b5ba0", 00:17:30.276 "is_configured": true, 00:17:30.276 "data_offset": 256, 00:17:30.276 "data_size": 7936 00:17:30.276 }, 00:17:30.276 { 00:17:30.276 "name": "BaseBdev2", 00:17:30.276 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:30.276 "is_configured": true, 00:17:30.276 "data_offset": 256, 00:17:30.276 "data_size": 7936 00:17:30.276 } 00:17:30.276 ] 00:17:30.276 }' 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.276 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.538 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.538 "name": "raid_bdev1", 00:17:30.538 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:30.538 "strip_size_kb": 0, 00:17:30.538 "state": "online", 00:17:30.538 "raid_level": "raid1", 00:17:30.538 "superblock": true, 00:17:30.538 "num_base_bdevs": 2, 00:17:30.538 "num_base_bdevs_discovered": 2, 00:17:30.538 "num_base_bdevs_operational": 2, 00:17:30.538 "base_bdevs_list": [ 00:17:30.538 { 00:17:30.538 "name": "spare", 00:17:30.538 "uuid": "35ac012b-e4d1-5e76-8d7c-aa4ab34b5ba0", 00:17:30.538 "is_configured": true, 00:17:30.538 "data_offset": 256, 00:17:30.538 "data_size": 7936 00:17:30.538 }, 00:17:30.538 { 
00:17:30.538 "name": "BaseBdev2", 00:17:30.538 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:30.538 "is_configured": true, 00:17:30.538 "data_offset": 256, 00:17:30.538 "data_size": 7936 00:17:30.538 } 00:17:30.539 ] 00:17:30.539 }' 00:17:30.539 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.539 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.797 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:30.797 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.797 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.798 [2024-09-29 21:48:49.747180] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.798 [2024-09-29 21:48:49.747214] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.798 [2024-09-29 21:48:49.747286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.798 [2024-09-29 21:48:49.747347] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.798 [2024-09-29 21:48:49.747356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:30.798 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.798 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.798 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.798 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.798 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:30.798 21:48:49 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.056 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:31.056 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:31.056 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:31.056 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:31.056 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.056 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:31.056 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:31.056 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:31.056 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:31.056 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:31.056 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:31.056 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:31.056 21:48:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:31.056 /dev/nbd0 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@869 -- # local i 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.316 1+0 records in 00:17:31.316 1+0 records out 00:17:31.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411235 s, 10.0 MB/s 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd1 00:17:31.316 /dev/nbd1 00:17:31.316 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.575 1+0 records in 00:17:31.575 1+0 records out 00:17:31.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415516 s, 9.9 MB/s 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:31.575 21:48:50 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.575 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:31.834 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:31.834 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:31.834 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:31.834 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.834 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.834 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:31.834 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 
00:17:31.834 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.834 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.834 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.094 [2024-09-29 21:48:50.938236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:32.094 [2024-09-29 21:48:50.938290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.094 [2024-09-29 21:48:50.938312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:32.094 [2024-09-29 21:48:50.938321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.094 [2024-09-29 21:48:50.940328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.094 [2024-09-29 21:48:50.940413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:32.094 [2024-09-29 21:48:50.940506] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:32.094 [2024-09-29 21:48:50.940557] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.094 [2024-09-29 21:48:50.940690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:32.094 spare 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.094 21:48:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.094 [2024-09-29 21:48:51.040584] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:32.094 [2024-09-29 21:48:51.040610] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:32.094 [2024-09-29 21:48:51.040841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0001c1b50 00:17:32.094 [2024-09-29 21:48:51.040999] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:32.094 [2024-09-29 21:48:51.041009] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:32.094 [2024-09-29 21:48:51.041176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.094 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.354 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.354 "name": "raid_bdev1", 00:17:32.354 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:32.354 "strip_size_kb": 0, 00:17:32.354 "state": "online", 00:17:32.354 "raid_level": "raid1", 00:17:32.354 "superblock": true, 00:17:32.354 "num_base_bdevs": 2, 00:17:32.354 "num_base_bdevs_discovered": 2, 00:17:32.354 "num_base_bdevs_operational": 2, 00:17:32.354 "base_bdevs_list": [ 00:17:32.354 { 00:17:32.354 "name": "spare", 00:17:32.354 "uuid": "35ac012b-e4d1-5e76-8d7c-aa4ab34b5ba0", 00:17:32.354 "is_configured": true, 00:17:32.354 "data_offset": 256, 00:17:32.354 "data_size": 7936 00:17:32.354 }, 00:17:32.354 { 00:17:32.354 "name": "BaseBdev2", 00:17:32.354 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:32.354 "is_configured": true, 00:17:32.354 "data_offset": 256, 00:17:32.354 "data_size": 7936 00:17:32.354 } 00:17:32.354 ] 00:17:32.354 }' 00:17:32.354 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.354 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.614 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.614 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.614 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.614 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.614 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:17:32.614 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.614 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.614 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.614 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.614 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.614 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.614 "name": "raid_bdev1", 00:17:32.614 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:32.614 "strip_size_kb": 0, 00:17:32.614 "state": "online", 00:17:32.614 "raid_level": "raid1", 00:17:32.614 "superblock": true, 00:17:32.614 "num_base_bdevs": 2, 00:17:32.614 "num_base_bdevs_discovered": 2, 00:17:32.614 "num_base_bdevs_operational": 2, 00:17:32.614 "base_bdevs_list": [ 00:17:32.614 { 00:17:32.614 "name": "spare", 00:17:32.614 "uuid": "35ac012b-e4d1-5e76-8d7c-aa4ab34b5ba0", 00:17:32.614 "is_configured": true, 00:17:32.614 "data_offset": 256, 00:17:32.614 "data_size": 7936 00:17:32.614 }, 00:17:32.614 { 00:17:32.614 "name": "BaseBdev2", 00:17:32.614 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:32.614 "is_configured": true, 00:17:32.614 "data_offset": 256, 00:17:32.614 "data_size": 7936 00:17:32.614 } 00:17:32.614 ] 00:17:32.614 }' 00:17:32.614 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
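Editor's note on two idioms visible in the xtrace above, sketched with local stand-ins so the snippet runs without a live SPDK target: jq's `.process.type // "none"` supplies the string "none" when the raid bdev has no active process, and patterns like `[[ none == \n\o\n\e ]]` are simply how `bash -x` prints a quoted (literal, non-glob) right-hand side inside `[[ ]]`.

```shell
# Stand-in for the effect of jq '.process.type // "none"': when no process
# field is present, fall back to the literal string "none".
process_type=""                      # imagine jq produced no value here
process_type=${process_type:-none}

# Quoted RHS forces a literal comparison (no glob matching); under bash -x
# this is what gets rendered as [[ none == \n\o\n\e ]].
if [[ $process_type == "none" ]]; then
  result=match
else
  result=mismatch
fi
echo "$result"
```

The comparison itself is the same check `bdev_raid.sh@176`/`@177` perform on the `process.type` and `process.target` fields.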
00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.874 [2024-09-29 21:48:51.728963] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.874 "name": "raid_bdev1", 00:17:32.874 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:32.874 "strip_size_kb": 0, 00:17:32.874 "state": "online", 00:17:32.874 "raid_level": "raid1", 00:17:32.874 "superblock": true, 00:17:32.874 "num_base_bdevs": 2, 00:17:32.874 "num_base_bdevs_discovered": 1, 00:17:32.874 "num_base_bdevs_operational": 1, 00:17:32.874 "base_bdevs_list": [ 00:17:32.874 { 00:17:32.874 "name": null, 00:17:32.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.874 "is_configured": false, 00:17:32.874 "data_offset": 0, 00:17:32.874 "data_size": 7936 00:17:32.874 }, 00:17:32.874 { 00:17:32.874 "name": "BaseBdev2", 00:17:32.874 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:32.874 "is_configured": true, 00:17:32.874 "data_offset": 256, 00:17:32.874 "data_size": 7936 00:17:32.874 } 00:17:32.874 ] 00:17:32.874 }' 00:17:32.874 21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.874 
21:48:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.442 21:48:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:33.442 21:48:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.442 21:48:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.442 [2024-09-29 21:48:52.184217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:33.442 [2024-09-29 21:48:52.184408] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:33.442 [2024-09-29 21:48:52.184471] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:33.442 [2024-09-29 21:48:52.184523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:33.442 [2024-09-29 21:48:52.198899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:33.442 21:48:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.442 [2024-09-29 21:48:52.200611] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:33.442 21:48:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.382 "name": "raid_bdev1", 00:17:34.382 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:34.382 "strip_size_kb": 0, 00:17:34.382 "state": "online", 00:17:34.382 "raid_level": "raid1", 00:17:34.382 "superblock": true, 00:17:34.382 "num_base_bdevs": 2, 00:17:34.382 "num_base_bdevs_discovered": 2, 00:17:34.382 "num_base_bdevs_operational": 2, 00:17:34.382 "process": { 00:17:34.382 "type": "rebuild", 00:17:34.382 "target": "spare", 00:17:34.382 "progress": { 00:17:34.382 "blocks": 2560, 00:17:34.382 "percent": 32 00:17:34.382 } 00:17:34.382 }, 00:17:34.382 "base_bdevs_list": [ 00:17:34.382 { 00:17:34.382 "name": "spare", 00:17:34.382 "uuid": "35ac012b-e4d1-5e76-8d7c-aa4ab34b5ba0", 00:17:34.382 "is_configured": true, 00:17:34.382 "data_offset": 256, 00:17:34.382 "data_size": 7936 00:17:34.382 }, 00:17:34.382 { 00:17:34.382 "name": "BaseBdev2", 00:17:34.382 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:34.382 "is_configured": true, 00:17:34.382 "data_offset": 256, 00:17:34.382 "data_size": 7936 00:17:34.382 } 00:17:34.382 ] 00:17:34.382 }' 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.382 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.382 [2024-09-29 21:48:53.364456] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.642 [2024-09-29 21:48:53.405084] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:34.642 [2024-09-29 21:48:53.405157] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.642 [2024-09-29 21:48:53.405170] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:34.642 [2024-09-29 21:48:53.405179] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.642 "name": "raid_bdev1", 00:17:34.642 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:34.642 "strip_size_kb": 0, 00:17:34.642 "state": "online", 00:17:34.642 "raid_level": "raid1", 00:17:34.642 "superblock": true, 00:17:34.642 "num_base_bdevs": 2, 00:17:34.642 "num_base_bdevs_discovered": 1, 00:17:34.642 "num_base_bdevs_operational": 1, 00:17:34.642 "base_bdevs_list": [ 00:17:34.642 { 00:17:34.642 "name": null, 00:17:34.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.642 "is_configured": false, 00:17:34.642 "data_offset": 0, 00:17:34.642 "data_size": 7936 00:17:34.642 }, 00:17:34.642 { 00:17:34.642 "name": "BaseBdev2", 00:17:34.642 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:34.642 "is_configured": true, 00:17:34.642 "data_offset": 256, 00:17:34.642 "data_size": 7936 00:17:34.642 } 00:17:34.642 ] 00:17:34.642 }' 
00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.642 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.211 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:35.211 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.211 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.211 [2024-09-29 21:48:53.922137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:35.211 [2024-09-29 21:48:53.922200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.211 [2024-09-29 21:48:53.922218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:35.211 [2024-09-29 21:48:53.922229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.211 [2024-09-29 21:48:53.922672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.211 [2024-09-29 21:48:53.922704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:35.211 [2024-09-29 21:48:53.922784] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:35.211 [2024-09-29 21:48:53.922805] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:35.211 [2024-09-29 21:48:53.922814] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
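The test re-adds `spare` and then relies on `sleep 1` before verifying the rebuild process. As an editor's sketch only, with `rpc_cmd` stubbed because no SPDK target is available here, the same check can be written as a short poll loop over `bdev_raid_get_bdevs`:

```shell
# Stub standing in for SPDK's rpc_cmd helper; a real run would query the
# target application instead of echoing canned JSON.
rpc_cmd() { echo '{"process": {"type": "rebuild", "target": "spare"}}'; }

status=timeout
for _ in 1 2 3; do
  out=$(rpc_cmd bdev_raid_get_bdevs all)
  # Same field the trace inspects via jq '.process.type // "none"'.
  if echo "$out" | grep -q '"type": "rebuild"'; then
    status=rebuilding
    break
  fi
  sleep 1
done
echo "$status"
```

A poll loop bounds the wait instead of hard-coding it; the fixed `sleep 1` in the trace works here only because the delay bdev guarantees the rebuild is still in flight.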
00:17:35.211 [2024-09-29 21:48:53.922835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.211 [2024-09-29 21:48:53.937657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:35.211 spare 00:17:35.211 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.211 21:48:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:35.211 [2024-09-29 21:48:53.939429] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.148 21:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.148 21:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.148 21:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.148 21:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.148 21:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.148 21:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.148 21:48:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.148 21:48:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.148 21:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.148 21:48:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.148 21:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.148 "name": "raid_bdev1", 00:17:36.148 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:36.148 "strip_size_kb": 0, 00:17:36.148 
"state": "online", 00:17:36.148 "raid_level": "raid1", 00:17:36.148 "superblock": true, 00:17:36.148 "num_base_bdevs": 2, 00:17:36.148 "num_base_bdevs_discovered": 2, 00:17:36.148 "num_base_bdevs_operational": 2, 00:17:36.148 "process": { 00:17:36.148 "type": "rebuild", 00:17:36.148 "target": "spare", 00:17:36.148 "progress": { 00:17:36.148 "blocks": 2560, 00:17:36.148 "percent": 32 00:17:36.148 } 00:17:36.148 }, 00:17:36.148 "base_bdevs_list": [ 00:17:36.148 { 00:17:36.148 "name": "spare", 00:17:36.148 "uuid": "35ac012b-e4d1-5e76-8d7c-aa4ab34b5ba0", 00:17:36.149 "is_configured": true, 00:17:36.149 "data_offset": 256, 00:17:36.149 "data_size": 7936 00:17:36.149 }, 00:17:36.149 { 00:17:36.149 "name": "BaseBdev2", 00:17:36.149 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:36.149 "is_configured": true, 00:17:36.149 "data_offset": 256, 00:17:36.149 "data_size": 7936 00:17:36.149 } 00:17:36.149 ] 00:17:36.149 }' 00:17:36.149 21:48:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.149 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.149 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.149 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.149 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:36.149 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.149 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.149 [2024-09-29 21:48:55.103499] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.409 [2024-09-29 21:48:55.143878] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
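The `Finished rebuild ... No such device` warning above is the expected outcome of deleting the rebuild target mid-rebuild: the process ends with ENODEV and the raid bdev stays online but degraded. The sketch below is a simplified stand-in for part of what `verify_raid_bdev_state` appears to assert (an assumption from the variable names in the trace, not a copy of the function), with the values lifted from the `raid_bdev_info` JSON that follows:

```shell
# Values taken from the raid_bdev_info JSON after the target was removed:
# raid_bdev1 is online with 1 of its 2 base bdevs remaining.
state=online
num_base_bdevs=2
num_base_bdevs_discovered=1
num_base_bdevs_operational=1

if [[ $state == online && $num_base_bdevs_discovered -eq $num_base_bdevs_operational ]]; then
  echo "online (degraded: $num_base_bdevs_discovered of $num_base_bdevs base bdevs)"
fi
```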
00:17:36.409 [2024-09-29 21:48:55.143932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.409 [2024-09-29 21:48:55.143949] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:36.409 [2024-09-29 21:48:55.143956] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.409 21:48:55 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.409 "name": "raid_bdev1", 00:17:36.409 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:36.409 "strip_size_kb": 0, 00:17:36.409 "state": "online", 00:17:36.409 "raid_level": "raid1", 00:17:36.409 "superblock": true, 00:17:36.409 "num_base_bdevs": 2, 00:17:36.409 "num_base_bdevs_discovered": 1, 00:17:36.409 "num_base_bdevs_operational": 1, 00:17:36.409 "base_bdevs_list": [ 00:17:36.409 { 00:17:36.409 "name": null, 00:17:36.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.409 "is_configured": false, 00:17:36.409 "data_offset": 0, 00:17:36.409 "data_size": 7936 00:17:36.409 }, 00:17:36.409 { 00:17:36.409 "name": "BaseBdev2", 00:17:36.409 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:36.409 "is_configured": true, 00:17:36.409 "data_offset": 256, 00:17:36.409 "data_size": 7936 00:17:36.409 } 00:17:36.409 ] 00:17:36.409 }' 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.409 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.669 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:36.669 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.669 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:36.669 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:36.669 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.669 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.669 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.669 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.669 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.669 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.669 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.669 "name": "raid_bdev1", 00:17:36.669 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:36.669 "strip_size_kb": 0, 00:17:36.669 "state": "online", 00:17:36.669 "raid_level": "raid1", 00:17:36.669 "superblock": true, 00:17:36.669 "num_base_bdevs": 2, 00:17:36.669 "num_base_bdevs_discovered": 1, 00:17:36.669 "num_base_bdevs_operational": 1, 00:17:36.669 "base_bdevs_list": [ 00:17:36.669 { 00:17:36.669 "name": null, 00:17:36.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.669 "is_configured": false, 00:17:36.669 "data_offset": 0, 00:17:36.669 "data_size": 7936 00:17:36.669 }, 00:17:36.669 { 00:17:36.669 "name": "BaseBdev2", 00:17:36.669 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:36.669 "is_configured": true, 00:17:36.669 "data_offset": 256, 00:17:36.669 "data_size": 7936 00:17:36.669 } 00:17:36.669 ] 00:17:36.669 }' 00:17:36.669 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.929 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:36.929 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.929 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:36.929 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:36.929 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.929 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.929 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.929 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:36.929 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.929 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.929 [2024-09-29 21:48:55.749397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:36.929 [2024-09-29 21:48:55.749447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.929 [2024-09-29 21:48:55.749468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:36.929 [2024-09-29 21:48:55.749476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.929 [2024-09-29 21:48:55.749891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.929 [2024-09-29 21:48:55.749917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:36.929 [2024-09-29 21:48:55.749987] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:36.929 [2024-09-29 21:48:55.750007] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:36.929 [2024-09-29 21:48:55.750019] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:36.929 [2024-09-29 21:48:55.750029] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:36.929 BaseBdev1 00:17:36.929 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.929 21:48:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.869 "name": "raid_bdev1", 00:17:37.869 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:37.869 "strip_size_kb": 0, 00:17:37.869 "state": "online", 00:17:37.869 "raid_level": "raid1", 00:17:37.869 "superblock": true, 00:17:37.869 "num_base_bdevs": 2, 00:17:37.869 "num_base_bdevs_discovered": 1, 00:17:37.869 "num_base_bdevs_operational": 1, 00:17:37.869 "base_bdevs_list": [ 00:17:37.869 { 00:17:37.869 "name": null, 00:17:37.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.869 "is_configured": false, 00:17:37.869 "data_offset": 0, 00:17:37.869 "data_size": 7936 00:17:37.869 }, 00:17:37.869 { 00:17:37.869 "name": "BaseBdev2", 00:17:37.869 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:37.869 "is_configured": true, 00:17:37.869 "data_offset": 256, 00:17:37.869 "data_size": 7936 00:17:37.869 } 00:17:37.869 ] 00:17:37.869 }' 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.869 21:48:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.439 "name": "raid_bdev1", 00:17:38.439 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:38.439 "strip_size_kb": 0, 00:17:38.439 "state": "online", 00:17:38.439 "raid_level": "raid1", 00:17:38.439 "superblock": true, 00:17:38.439 "num_base_bdevs": 2, 00:17:38.439 "num_base_bdevs_discovered": 1, 00:17:38.439 "num_base_bdevs_operational": 1, 00:17:38.439 "base_bdevs_list": [ 00:17:38.439 { 00:17:38.439 "name": null, 00:17:38.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.439 "is_configured": false, 00:17:38.439 "data_offset": 0, 00:17:38.439 "data_size": 7936 00:17:38.439 }, 00:17:38.439 { 00:17:38.439 "name": "BaseBdev2", 00:17:38.439 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:38.439 "is_configured": true, 00:17:38.439 "data_offset": 256, 00:17:38.439 "data_size": 7936 00:17:38.439 } 00:17:38.439 ] 00:17:38.439 }' 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.439 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.439 [2024-09-29 21:48:57.342692] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:38.439 [2024-09-29 21:48:57.342835] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:38.439 [2024-09-29 21:48:57.342855] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:38.439 request: 00:17:38.439 { 00:17:38.439 "base_bdev": "BaseBdev1", 00:17:38.439 "raid_bdev": "raid_bdev1", 00:17:38.439 "method": "bdev_raid_add_base_bdev", 00:17:38.439 "req_id": 1 00:17:38.440 } 00:17:38.440 Got JSON-RPC error response 00:17:38.440 response: 00:17:38.440 { 00:17:38.440 "code": -22, 00:17:38.440 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:38.440 } 00:17:38.440 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:17:38.440 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:17:38.440 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:38.440 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:38.440 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:38.440 21:48:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:39.380 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:39.380 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.380 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.380 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.380 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.380 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:39.380 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.380 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.380 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.380 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.639 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.639 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.639 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:39.639 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.639 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.639 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.639 "name": "raid_bdev1", 00:17:39.639 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:39.639 "strip_size_kb": 0, 00:17:39.639 "state": "online", 00:17:39.639 "raid_level": "raid1", 00:17:39.639 "superblock": true, 00:17:39.639 "num_base_bdevs": 2, 00:17:39.639 "num_base_bdevs_discovered": 1, 00:17:39.639 "num_base_bdevs_operational": 1, 00:17:39.639 "base_bdevs_list": [ 00:17:39.639 { 00:17:39.639 "name": null, 00:17:39.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.639 "is_configured": false, 00:17:39.639 "data_offset": 0, 00:17:39.639 "data_size": 7936 00:17:39.639 }, 00:17:39.639 { 00:17:39.639 "name": "BaseBdev2", 00:17:39.639 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:39.639 "is_configured": true, 00:17:39.639 "data_offset": 256, 00:17:39.639 "data_size": 7936 00:17:39.639 } 00:17:39.639 ] 00:17:39.639 }' 00:17:39.639 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.639 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.899 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.899 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.899 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.899 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.899 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.899 21:48:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.899 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.899 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.899 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.899 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.899 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.899 "name": "raid_bdev1", 00:17:39.899 "uuid": "f0783fef-a909-49c4-9e6e-6b444dad981b", 00:17:39.899 "strip_size_kb": 0, 00:17:39.899 "state": "online", 00:17:39.899 "raid_level": "raid1", 00:17:39.899 "superblock": true, 00:17:39.899 "num_base_bdevs": 2, 00:17:39.899 "num_base_bdevs_discovered": 1, 00:17:39.899 "num_base_bdevs_operational": 1, 00:17:39.899 "base_bdevs_list": [ 00:17:39.899 { 00:17:39.899 "name": null, 00:17:39.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.899 "is_configured": false, 00:17:39.899 "data_offset": 0, 00:17:39.899 "data_size": 7936 00:17:39.899 }, 00:17:39.899 { 00:17:39.899 "name": "BaseBdev2", 00:17:39.899 "uuid": "61bd86a5-1cc5-5eea-bb20-6f580b7db178", 00:17:39.899 "is_configured": true, 00:17:39.899 "data_offset": 256, 00:17:39.899 "data_size": 7936 00:17:39.899 } 00:17:39.899 ] 00:17:39.899 }' 00:17:39.899 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.899 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.899 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.160 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.160 21:48:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86551 00:17:40.160 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86551 ']' 00:17:40.160 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86551 00:17:40.160 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:17:40.160 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:40.160 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86551 00:17:40.160 killing process with pid 86551 00:17:40.160 Received shutdown signal, test time was about 60.000000 seconds 00:17:40.160 00:17:40.160 Latency(us) 00:17:40.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.160 =================================================================================================================== 00:17:40.160 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:40.160 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:40.160 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:40.160 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86551' 00:17:40.160 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86551 00:17:40.160 [2024-09-29 21:48:58.966117] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:40.160 [2024-09-29 21:48:58.966227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.160 [2024-09-29 21:48:58.966272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.160 [2024-09-29 21:48:58.966282] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:40.160 21:48:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86551 00:17:40.419 [2024-09-29 21:48:59.240448] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:41.420 21:49:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:41.420 00:17:41.420 real 0m19.850s 00:17:41.420 user 0m25.831s 00:17:41.420 sys 0m2.767s 00:17:41.421 21:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:41.421 21:49:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.421 ************************************ 00:17:41.421 END TEST raid_rebuild_test_sb_4k 00:17:41.421 ************************************ 00:17:41.680 21:49:00 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:41.680 21:49:00 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:41.680 21:49:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:41.680 21:49:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:41.680 21:49:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:41.680 ************************************ 00:17:41.680 START TEST raid_state_function_test_sb_md_separate 00:17:41.680 ************************************ 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- 
# local superblock=true 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 
00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87236 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:41.680 Process raid pid: 87236 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87236' 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87236 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87236 ']' 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.680 21:49:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.680 [2024-09-29 21:49:00.581080] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:41.680 [2024-09-29 21:49:00.581195] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.940 [2024-09-29 21:49:00.732423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.940 [2024-09-29 21:49:00.920918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.199 [2024-09-29 21:49:01.089961] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.199 [2024-09-29 21:49:01.089999] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.459 [2024-09-29 21:49:01.373782] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.459 [2024-09-29 21:49:01.373837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:42.459 [2024-09-29 21:49:01.373846] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.459 [2024-09-29 21:49:01.373855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.459 "name": "Existed_Raid", 00:17:42.459 "uuid": "187c5c19-f2b2-49ab-aff9-bee4e6b866d0", 00:17:42.459 "strip_size_kb": 0, 00:17:42.459 "state": "configuring", 00:17:42.459 "raid_level": "raid1", 00:17:42.459 "superblock": true, 00:17:42.459 "num_base_bdevs": 2, 00:17:42.459 "num_base_bdevs_discovered": 0, 00:17:42.459 "num_base_bdevs_operational": 2, 00:17:42.459 "base_bdevs_list": [ 00:17:42.459 { 00:17:42.459 "name": "BaseBdev1", 00:17:42.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.459 "is_configured": false, 00:17:42.459 "data_offset": 0, 00:17:42.459 "data_size": 0 00:17:42.459 }, 00:17:42.459 { 00:17:42.459 "name": "BaseBdev2", 00:17:42.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.459 "is_configured": false, 00:17:42.459 "data_offset": 0, 00:17:42.459 "data_size": 0 00:17:42.459 } 00:17:42.459 ] 00:17:42.459 }' 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.459 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.029 
[2024-09-29 21:49:01.793103] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.029 [2024-09-29 21:49:01.793138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.029 [2024-09-29 21:49:01.801124] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:43.029 [2024-09-29 21:49:01.801163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:43.029 [2024-09-29 21:49:01.801171] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:43.029 [2024-09-29 21:49:01.801182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.029 [2024-09-29 21:49:01.859727] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:43.029 
BaseBdev1 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.029 [ 00:17:43.029 { 00:17:43.029 "name": "BaseBdev1", 00:17:43.029 "aliases": [ 00:17:43.029 "c0ab300f-c48b-443d-bb05-097d14545040" 00:17:43.029 ], 00:17:43.029 "product_name": "Malloc disk", 
00:17:43.029 "block_size": 4096, 00:17:43.029 "num_blocks": 8192, 00:17:43.029 "uuid": "c0ab300f-c48b-443d-bb05-097d14545040", 00:17:43.029 "md_size": 32, 00:17:43.029 "md_interleave": false, 00:17:43.029 "dif_type": 0, 00:17:43.029 "assigned_rate_limits": { 00:17:43.029 "rw_ios_per_sec": 0, 00:17:43.029 "rw_mbytes_per_sec": 0, 00:17:43.029 "r_mbytes_per_sec": 0, 00:17:43.029 "w_mbytes_per_sec": 0 00:17:43.029 }, 00:17:43.029 "claimed": true, 00:17:43.029 "claim_type": "exclusive_write", 00:17:43.029 "zoned": false, 00:17:43.029 "supported_io_types": { 00:17:43.029 "read": true, 00:17:43.029 "write": true, 00:17:43.029 "unmap": true, 00:17:43.029 "flush": true, 00:17:43.029 "reset": true, 00:17:43.029 "nvme_admin": false, 00:17:43.029 "nvme_io": false, 00:17:43.029 "nvme_io_md": false, 00:17:43.029 "write_zeroes": true, 00:17:43.029 "zcopy": true, 00:17:43.029 "get_zone_info": false, 00:17:43.029 "zone_management": false, 00:17:43.029 "zone_append": false, 00:17:43.029 "compare": false, 00:17:43.029 "compare_and_write": false, 00:17:43.029 "abort": true, 00:17:43.029 "seek_hole": false, 00:17:43.029 "seek_data": false, 00:17:43.029 "copy": true, 00:17:43.029 "nvme_iov_md": false 00:17:43.029 }, 00:17:43.029 "memory_domains": [ 00:17:43.029 { 00:17:43.029 "dma_device_id": "system", 00:17:43.029 "dma_device_type": 1 00:17:43.029 }, 00:17:43.029 { 00:17:43.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.029 "dma_device_type": 2 00:17:43.029 } 00:17:43.029 ], 00:17:43.029 "driver_specific": {} 00:17:43.029 } 00:17:43.029 ] 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:43.029 21:49:01 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.029 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.029 "name": "Existed_Raid", 00:17:43.029 "uuid": "46cebd39-995e-4eb5-b62e-a6b77fb5e20d", 
00:17:43.029 "strip_size_kb": 0, 00:17:43.029 "state": "configuring", 00:17:43.029 "raid_level": "raid1", 00:17:43.029 "superblock": true, 00:17:43.029 "num_base_bdevs": 2, 00:17:43.029 "num_base_bdevs_discovered": 1, 00:17:43.029 "num_base_bdevs_operational": 2, 00:17:43.029 "base_bdevs_list": [ 00:17:43.029 { 00:17:43.029 "name": "BaseBdev1", 00:17:43.029 "uuid": "c0ab300f-c48b-443d-bb05-097d14545040", 00:17:43.029 "is_configured": true, 00:17:43.030 "data_offset": 256, 00:17:43.030 "data_size": 7936 00:17:43.030 }, 00:17:43.030 { 00:17:43.030 "name": "BaseBdev2", 00:17:43.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.030 "is_configured": false, 00:17:43.030 "data_offset": 0, 00:17:43.030 "data_size": 0 00:17:43.030 } 00:17:43.030 ] 00:17:43.030 }' 00:17:43.030 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.030 21:49:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.290 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:43.290 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.290 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.290 [2024-09-29 21:49:02.263107] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.290 [2024-09-29 21:49:02.263146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:43.290 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.290 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:43.290 21:49:02 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.290 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.551 [2024-09-29 21:49:02.275169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:43.551 [2024-09-29 21:49:02.276776] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:43.551 [2024-09-29 21:49:02.276818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.551 "name": "Existed_Raid", 00:17:43.551 "uuid": "2cecb32b-971d-451c-b2dc-d2b757b9ce4b", 00:17:43.551 "strip_size_kb": 0, 00:17:43.551 "state": "configuring", 00:17:43.551 "raid_level": "raid1", 00:17:43.551 "superblock": true, 00:17:43.551 "num_base_bdevs": 2, 00:17:43.551 "num_base_bdevs_discovered": 1, 00:17:43.551 "num_base_bdevs_operational": 2, 00:17:43.551 "base_bdevs_list": [ 00:17:43.551 { 00:17:43.551 "name": "BaseBdev1", 00:17:43.551 "uuid": "c0ab300f-c48b-443d-bb05-097d14545040", 00:17:43.551 "is_configured": true, 00:17:43.551 "data_offset": 256, 00:17:43.551 "data_size": 7936 00:17:43.551 }, 00:17:43.551 { 00:17:43.551 "name": "BaseBdev2", 00:17:43.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.551 "is_configured": false, 00:17:43.551 "data_offset": 0, 00:17:43.551 "data_size": 0 00:17:43.551 } 00:17:43.551 ] 00:17:43.551 }' 00:17:43.551 21:49:02 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.551 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.811 [2024-09-29 21:49:02.727571] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.811 [2024-09-29 21:49:02.727793] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:43.811 [2024-09-29 21:49:02.727806] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:43.811 [2024-09-29 21:49:02.727883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:43.811 [2024-09-29 21:49:02.727988] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:43.811 [2024-09-29 21:49:02.728004] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:43.811 [2024-09-29 21:49:02.728105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.811 BaseBdev2 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.811 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.811 [ 00:17:43.811 { 00:17:43.811 "name": "BaseBdev2", 00:17:43.811 "aliases": [ 00:17:43.811 "8544a8ab-7b7b-4e0b-bf88-527e31f2d0e0" 00:17:43.811 ], 00:17:43.811 "product_name": "Malloc disk", 00:17:43.811 "block_size": 4096, 00:17:43.811 "num_blocks": 8192, 00:17:43.811 "uuid": "8544a8ab-7b7b-4e0b-bf88-527e31f2d0e0", 00:17:43.811 "md_size": 32, 00:17:43.811 "md_interleave": false, 00:17:43.811 "dif_type": 0, 00:17:43.811 "assigned_rate_limits": { 00:17:43.811 "rw_ios_per_sec": 0, 00:17:43.811 "rw_mbytes_per_sec": 0, 00:17:43.811 "r_mbytes_per_sec": 0, 00:17:43.811 "w_mbytes_per_sec": 0 00:17:43.811 }, 00:17:43.811 "claimed": true, 00:17:43.811 "claim_type": 
"exclusive_write", 00:17:43.811 "zoned": false, 00:17:43.812 "supported_io_types": { 00:17:43.812 "read": true, 00:17:43.812 "write": true, 00:17:43.812 "unmap": true, 00:17:43.812 "flush": true, 00:17:43.812 "reset": true, 00:17:43.812 "nvme_admin": false, 00:17:43.812 "nvme_io": false, 00:17:43.812 "nvme_io_md": false, 00:17:43.812 "write_zeroes": true, 00:17:43.812 "zcopy": true, 00:17:43.812 "get_zone_info": false, 00:17:43.812 "zone_management": false, 00:17:43.812 "zone_append": false, 00:17:43.812 "compare": false, 00:17:43.812 "compare_and_write": false, 00:17:43.812 "abort": true, 00:17:43.812 "seek_hole": false, 00:17:43.812 "seek_data": false, 00:17:43.812 "copy": true, 00:17:43.812 "nvme_iov_md": false 00:17:43.812 }, 00:17:43.812 "memory_domains": [ 00:17:43.812 { 00:17:43.812 "dma_device_id": "system", 00:17:43.812 "dma_device_type": 1 00:17:43.812 }, 00:17:43.812 { 00:17:43.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.812 "dma_device_type": 2 00:17:43.812 } 00:17:43.812 ], 00:17:43.812 "driver_specific": {} 00:17:43.812 } 00:17:43.812 ] 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.812 
21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.812 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.072 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.072 "name": "Existed_Raid", 00:17:44.072 "uuid": "2cecb32b-971d-451c-b2dc-d2b757b9ce4b", 00:17:44.072 "strip_size_kb": 0, 00:17:44.072 "state": "online", 00:17:44.072 "raid_level": "raid1", 00:17:44.072 "superblock": true, 00:17:44.072 "num_base_bdevs": 2, 00:17:44.072 "num_base_bdevs_discovered": 2, 00:17:44.072 "num_base_bdevs_operational": 2, 00:17:44.072 
"base_bdevs_list": [ 00:17:44.072 { 00:17:44.072 "name": "BaseBdev1", 00:17:44.072 "uuid": "c0ab300f-c48b-443d-bb05-097d14545040", 00:17:44.072 "is_configured": true, 00:17:44.072 "data_offset": 256, 00:17:44.072 "data_size": 7936 00:17:44.072 }, 00:17:44.072 { 00:17:44.072 "name": "BaseBdev2", 00:17:44.072 "uuid": "8544a8ab-7b7b-4e0b-bf88-527e31f2d0e0", 00:17:44.072 "is_configured": true, 00:17:44.072 "data_offset": 256, 00:17:44.072 "data_size": 7936 00:17:44.072 } 00:17:44.072 ] 00:17:44.072 }' 00:17:44.072 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.072 21:49:02 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.332 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:44.332 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:44.332 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:44.332 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:44.332 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:44.332 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:44.332 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:44.332 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.332 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.332 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:17:44.332 [2024-09-29 21:49:03.151151] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:44.332 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.332 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:44.332 "name": "Existed_Raid", 00:17:44.332 "aliases": [ 00:17:44.332 "2cecb32b-971d-451c-b2dc-d2b757b9ce4b" 00:17:44.332 ], 00:17:44.332 "product_name": "Raid Volume", 00:17:44.332 "block_size": 4096, 00:17:44.332 "num_blocks": 7936, 00:17:44.332 "uuid": "2cecb32b-971d-451c-b2dc-d2b757b9ce4b", 00:17:44.332 "md_size": 32, 00:17:44.332 "md_interleave": false, 00:17:44.332 "dif_type": 0, 00:17:44.332 "assigned_rate_limits": { 00:17:44.332 "rw_ios_per_sec": 0, 00:17:44.332 "rw_mbytes_per_sec": 0, 00:17:44.332 "r_mbytes_per_sec": 0, 00:17:44.332 "w_mbytes_per_sec": 0 00:17:44.332 }, 00:17:44.332 "claimed": false, 00:17:44.332 "zoned": false, 00:17:44.332 "supported_io_types": { 00:17:44.332 "read": true, 00:17:44.332 "write": true, 00:17:44.332 "unmap": false, 00:17:44.332 "flush": false, 00:17:44.332 "reset": true, 00:17:44.332 "nvme_admin": false, 00:17:44.332 "nvme_io": false, 00:17:44.332 "nvme_io_md": false, 00:17:44.332 "write_zeroes": true, 00:17:44.332 "zcopy": false, 00:17:44.332 "get_zone_info": false, 00:17:44.332 "zone_management": false, 00:17:44.332 "zone_append": false, 00:17:44.332 "compare": false, 00:17:44.332 "compare_and_write": false, 00:17:44.332 "abort": false, 00:17:44.332 "seek_hole": false, 00:17:44.332 "seek_data": false, 00:17:44.332 "copy": false, 00:17:44.332 "nvme_iov_md": false 00:17:44.332 }, 00:17:44.332 "memory_domains": [ 00:17:44.333 { 00:17:44.333 "dma_device_id": "system", 00:17:44.333 "dma_device_type": 1 00:17:44.333 }, 00:17:44.333 { 00:17:44.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.333 "dma_device_type": 2 00:17:44.333 }, 00:17:44.333 { 
00:17:44.333 "dma_device_id": "system", 00:17:44.333 "dma_device_type": 1 00:17:44.333 }, 00:17:44.333 { 00:17:44.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.333 "dma_device_type": 2 00:17:44.333 } 00:17:44.333 ], 00:17:44.333 "driver_specific": { 00:17:44.333 "raid": { 00:17:44.333 "uuid": "2cecb32b-971d-451c-b2dc-d2b757b9ce4b", 00:17:44.333 "strip_size_kb": 0, 00:17:44.333 "state": "online", 00:17:44.333 "raid_level": "raid1", 00:17:44.333 "superblock": true, 00:17:44.333 "num_base_bdevs": 2, 00:17:44.333 "num_base_bdevs_discovered": 2, 00:17:44.333 "num_base_bdevs_operational": 2, 00:17:44.333 "base_bdevs_list": [ 00:17:44.333 { 00:17:44.333 "name": "BaseBdev1", 00:17:44.333 "uuid": "c0ab300f-c48b-443d-bb05-097d14545040", 00:17:44.333 "is_configured": true, 00:17:44.333 "data_offset": 256, 00:17:44.333 "data_size": 7936 00:17:44.333 }, 00:17:44.333 { 00:17:44.333 "name": "BaseBdev2", 00:17:44.333 "uuid": "8544a8ab-7b7b-4e0b-bf88-527e31f2d0e0", 00:17:44.333 "is_configured": true, 00:17:44.333 "data_offset": 256, 00:17:44.333 "data_size": 7936 00:17:44.333 } 00:17:44.333 ] 00:17:44.333 } 00:17:44.333 } 00:17:44.333 }' 00:17:44.333 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:44.333 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:44.333 BaseBdev2' 00:17:44.333 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.333 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:44.333 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:44.333 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:44.333 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.333 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.333 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.333 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.593 [2024-09-29 21:49:03.382520] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:44.593 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.594 "name": "Existed_Raid", 00:17:44.594 "uuid": "2cecb32b-971d-451c-b2dc-d2b757b9ce4b", 00:17:44.594 "strip_size_kb": 0, 00:17:44.594 "state": "online", 00:17:44.594 "raid_level": "raid1", 00:17:44.594 "superblock": true, 00:17:44.594 "num_base_bdevs": 2, 00:17:44.594 "num_base_bdevs_discovered": 1, 00:17:44.594 "num_base_bdevs_operational": 1, 00:17:44.594 "base_bdevs_list": [ 00:17:44.594 { 00:17:44.594 "name": null, 00:17:44.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.594 "is_configured": false, 00:17:44.594 "data_offset": 0, 00:17:44.594 "data_size": 7936 00:17:44.594 }, 00:17:44.594 { 00:17:44.594 "name": "BaseBdev2", 00:17:44.594 "uuid": 
"8544a8ab-7b7b-4e0b-bf88-527e31f2d0e0", 00:17:44.594 "is_configured": true, 00:17:44.594 "data_offset": 256, 00:17:44.594 "data_size": 7936 00:17:44.594 } 00:17:44.594 ] 00:17:44.594 }' 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.594 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.164 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:45.164 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:45.164 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.164 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.164 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.164 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:45.164 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.164 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:45.164 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:45.164 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:45.164 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.164 21:49:03 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.164 [2024-09-29 21:49:03.995871] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:45.164 [2024-09-29 21:49:03.995975] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.164 [2024-09-29 21:49:04.092009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.164 [2024-09-29 21:49:04.092064] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.164 [2024-09-29 21:49:04.092076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:45.164 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.164 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:45.164 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:45.164 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.164 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.164 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.164 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:45.164 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.164 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:45.164 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:45.164 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:45.164 21:49:04 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87236 00:17:45.164 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87236 ']' 00:17:45.164 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87236 00:17:45.424 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:45.424 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:45.424 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87236 00:17:45.424 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:45.424 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:45.424 killing process with pid 87236 00:17:45.424 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87236' 00:17:45.424 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87236 00:17:45.424 [2024-09-29 21:49:04.187909] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:45.424 21:49:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87236 00:17:45.424 [2024-09-29 21:49:04.203546] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.807 21:49:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:46.807 00:17:46.807 real 0m4.894s 00:17:46.807 user 0m6.878s 00:17:46.807 sys 0m0.884s 00:17:46.807 21:49:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:46.807 
21:49:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.807 ************************************ 00:17:46.807 END TEST raid_state_function_test_sb_md_separate 00:17:46.807 ************************************ 00:17:46.807 21:49:05 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:46.807 21:49:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:46.807 21:49:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:46.807 21:49:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.807 ************************************ 00:17:46.807 START TEST raid_superblock_test_md_separate 00:17:46.807 ************************************ 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87489 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87489 00:17:46.807 21:49:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87489 ']' 00:17:46.808 21:49:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.808 21:49:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:46.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.808 21:49:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:46.808 21:49:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:46.808 21:49:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.808 [2024-09-29 21:49:05.559739] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:46.808 [2024-09-29 21:49:05.559894] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87489 ] 00:17:46.808 [2024-09-29 21:49:05.730253] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.068 [2024-09-29 21:49:05.920912] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.328 [2024-09-29 21:49:06.107747] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.328 [2024-09-29 21:49:06.107799] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:47.589 21:49:06 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.589 malloc1 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.589 [2024-09-29 21:49:06.427721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:47.589 [2024-09-29 21:49:06.427782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.589 [2024-09-29 21:49:06.427824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:47.589 [2024-09-29 21:49:06.427833] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.589 [2024-09-29 21:49:06.429657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.589 [2024-09-29 21:49:06.429692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:47.589 pt1 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.589 malloc2 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.589 21:49:06 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.589 [2024-09-29 21:49:06.501127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:47.589 [2024-09-29 21:49:06.501181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.589 [2024-09-29 21:49:06.501202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:47.589 [2024-09-29 21:49:06.501210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.589 [2024-09-29 21:49:06.502877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.589 [2024-09-29 21:49:06.502910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:47.589 pt2 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.589 [2024-09-29 21:49:06.513188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:47.589 [2024-09-29 21:49:06.514787] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:47.589 [2024-09-29 21:49:06.514951] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:47.589 [2024-09-29 21:49:06.514964] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:47.589 [2024-09-29 21:49:06.515049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:47.589 [2024-09-29 21:49:06.515167] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:47.589 [2024-09-29 21:49:06.515185] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:47.589 [2024-09-29 21:49:06.515283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.589 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.590 21:49:06 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.590 "name": "raid_bdev1", 00:17:47.590 "uuid": "7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584", 00:17:47.590 "strip_size_kb": 0, 00:17:47.590 "state": "online", 00:17:47.590 "raid_level": "raid1", 00:17:47.590 "superblock": true, 00:17:47.590 "num_base_bdevs": 2, 00:17:47.590 "num_base_bdevs_discovered": 2, 00:17:47.590 "num_base_bdevs_operational": 2, 00:17:47.590 "base_bdevs_list": [ 00:17:47.590 { 00:17:47.590 "name": "pt1", 00:17:47.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.590 "is_configured": true, 00:17:47.590 "data_offset": 256, 00:17:47.590 "data_size": 7936 00:17:47.590 }, 00:17:47.590 { 00:17:47.590 "name": "pt2", 00:17:47.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.590 "is_configured": true, 00:17:47.590 "data_offset": 256, 00:17:47.590 "data_size": 7936 00:17:47.590 } 00:17:47.590 ] 00:17:47.590 }' 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.590 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.160 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:48.160 21:49:06 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:48.160 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:48.160 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:48.160 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:48.160 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:48.160 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.160 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.160 21:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:48.160 21:49:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.160 [2024-09-29 21:49:07.000500] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.160 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.160 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:48.160 "name": "raid_bdev1", 00:17:48.160 "aliases": [ 00:17:48.160 "7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584" 00:17:48.160 ], 00:17:48.160 "product_name": "Raid Volume", 00:17:48.160 "block_size": 4096, 00:17:48.160 "num_blocks": 7936, 00:17:48.160 "uuid": "7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584", 00:17:48.160 "md_size": 32, 00:17:48.160 "md_interleave": false, 00:17:48.160 "dif_type": 0, 00:17:48.160 "assigned_rate_limits": { 00:17:48.160 "rw_ios_per_sec": 0, 00:17:48.160 "rw_mbytes_per_sec": 0, 00:17:48.160 "r_mbytes_per_sec": 0, 00:17:48.160 "w_mbytes_per_sec": 0 00:17:48.160 }, 00:17:48.160 "claimed": false, 00:17:48.160 "zoned": false, 
00:17:48.160 "supported_io_types": { 00:17:48.160 "read": true, 00:17:48.160 "write": true, 00:17:48.160 "unmap": false, 00:17:48.160 "flush": false, 00:17:48.160 "reset": true, 00:17:48.160 "nvme_admin": false, 00:17:48.160 "nvme_io": false, 00:17:48.160 "nvme_io_md": false, 00:17:48.160 "write_zeroes": true, 00:17:48.160 "zcopy": false, 00:17:48.160 "get_zone_info": false, 00:17:48.160 "zone_management": false, 00:17:48.160 "zone_append": false, 00:17:48.160 "compare": false, 00:17:48.160 "compare_and_write": false, 00:17:48.160 "abort": false, 00:17:48.160 "seek_hole": false, 00:17:48.160 "seek_data": false, 00:17:48.160 "copy": false, 00:17:48.160 "nvme_iov_md": false 00:17:48.160 }, 00:17:48.160 "memory_domains": [ 00:17:48.160 { 00:17:48.160 "dma_device_id": "system", 00:17:48.161 "dma_device_type": 1 00:17:48.161 }, 00:17:48.161 { 00:17:48.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.161 "dma_device_type": 2 00:17:48.161 }, 00:17:48.161 { 00:17:48.161 "dma_device_id": "system", 00:17:48.161 "dma_device_type": 1 00:17:48.161 }, 00:17:48.161 { 00:17:48.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.161 "dma_device_type": 2 00:17:48.161 } 00:17:48.161 ], 00:17:48.161 "driver_specific": { 00:17:48.161 "raid": { 00:17:48.161 "uuid": "7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584", 00:17:48.161 "strip_size_kb": 0, 00:17:48.161 "state": "online", 00:17:48.161 "raid_level": "raid1", 00:17:48.161 "superblock": true, 00:17:48.161 "num_base_bdevs": 2, 00:17:48.161 "num_base_bdevs_discovered": 2, 00:17:48.161 "num_base_bdevs_operational": 2, 00:17:48.161 "base_bdevs_list": [ 00:17:48.161 { 00:17:48.161 "name": "pt1", 00:17:48.161 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.161 "is_configured": true, 00:17:48.161 "data_offset": 256, 00:17:48.161 "data_size": 7936 00:17:48.161 }, 00:17:48.161 { 00:17:48.161 "name": "pt2", 00:17:48.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.161 "is_configured": true, 00:17:48.161 "data_offset": 256, 
00:17:48.161 "data_size": 7936 00:17:48.161 } 00:17:48.161 ] 00:17:48.161 } 00:17:48.161 } 00:17:48.161 }' 00:17:48.161 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:48.161 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:48.161 pt2' 00:17:48.161 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.161 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:48.161 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.161 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:48.161 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.161 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.161 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.421 [2024-09-29 21:49:07.224119] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584 ']' 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:48.421 21:49:07 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.421 [2024-09-29 21:49:07.255822] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.421 [2024-09-29 21:49:07.255843] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.421 [2024-09-29 21:49:07.255904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.421 [2024-09-29 21:49:07.255951] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.421 [2024-09-29 21:49:07.255961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.421 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@650 -- # local es=0 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.422 [2024-09-29 21:49:07.383628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:48.422 [2024-09-29 21:49:07.385335] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:48.422 [2024-09-29 21:49:07.385407] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:48.422 [2024-09-29 21:49:07.385453] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:48.422 [2024-09-29 21:49:07.385467] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.422 [2024-09-29 21:49:07.385476] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:17:48.422 request: 00:17:48.422 { 00:17:48.422 "name": "raid_bdev1", 00:17:48.422 "raid_level": "raid1", 00:17:48.422 "base_bdevs": [ 00:17:48.422 "malloc1", 00:17:48.422 "malloc2" 00:17:48.422 ], 00:17:48.422 "superblock": false, 00:17:48.422 "method": "bdev_raid_create", 00:17:48.422 "req_id": 1 00:17:48.422 } 00:17:48.422 Got JSON-RPC error response 00:17:48.422 response: 00:17:48.422 { 00:17:48.422 "code": -17, 00:17:48.422 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:48.422 } 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.422 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.683 [2024-09-29 21:49:07.447492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:48.683 [2024-09-29 21:49:07.447537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.683 [2024-09-29 21:49:07.447565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:48.683 [2024-09-29 21:49:07.447576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.683 [2024-09-29 21:49:07.449377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.683 [2024-09-29 21:49:07.449414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:48.683 [2024-09-29 21:49:07.449450] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:48.683 [2024-09-29 21:49:07.449503] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:48.683 pt1 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.683 "name": "raid_bdev1", 00:17:48.683 "uuid": "7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584", 00:17:48.683 "strip_size_kb": 0, 00:17:48.683 "state": "configuring", 00:17:48.683 "raid_level": "raid1", 00:17:48.683 "superblock": true, 00:17:48.683 "num_base_bdevs": 2, 00:17:48.683 "num_base_bdevs_discovered": 1, 00:17:48.683 "num_base_bdevs_operational": 2, 00:17:48.683 "base_bdevs_list": [ 00:17:48.683 { 00:17:48.683 "name": "pt1", 00:17:48.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:48.683 "is_configured": true, 00:17:48.683 "data_offset": 256, 00:17:48.683 "data_size": 7936 00:17:48.683 }, 00:17:48.683 { 
00:17:48.683 "name": null, 00:17:48.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.683 "is_configured": false, 00:17:48.683 "data_offset": 256, 00:17:48.683 "data_size": 7936 00:17:48.683 } 00:17:48.683 ] 00:17:48.683 }' 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.683 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.944 [2024-09-29 21:49:07.894801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:48.944 [2024-09-29 21:49:07.894879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:48.944 [2024-09-29 21:49:07.894898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:48.944 [2024-09-29 21:49:07.894908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:48.944 [2024-09-29 21:49:07.895102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:48.944 [2024-09-29 21:49:07.895123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:48.944 [2024-09-29 21:49:07.895163] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:48.944 [2024-09-29 21:49:07.895182] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:48.944 [2024-09-29 21:49:07.895292] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:48.944 [2024-09-29 21:49:07.895308] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:48.944 [2024-09-29 21:49:07.895373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:48.944 [2024-09-29 21:49:07.895472] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:48.944 [2024-09-29 21:49:07.895484] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:48.944 [2024-09-29 21:49:07.895572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.944 pt2 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.944 21:49:07 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.944 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.204 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.205 "name": "raid_bdev1", 00:17:49.205 "uuid": "7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584", 00:17:49.205 "strip_size_kb": 0, 00:17:49.205 "state": "online", 00:17:49.205 "raid_level": "raid1", 00:17:49.205 "superblock": true, 00:17:49.205 "num_base_bdevs": 2, 00:17:49.205 "num_base_bdevs_discovered": 2, 00:17:49.205 "num_base_bdevs_operational": 2, 00:17:49.205 "base_bdevs_list": [ 00:17:49.205 { 00:17:49.205 "name": "pt1", 00:17:49.205 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.205 "is_configured": true, 00:17:49.205 "data_offset": 256, 00:17:49.205 "data_size": 7936 00:17:49.205 }, 00:17:49.205 { 00:17:49.205 "name": "pt2", 00:17:49.205 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:17:49.205 "is_configured": true, 00:17:49.205 "data_offset": 256, 00:17:49.205 "data_size": 7936 00:17:49.205 } 00:17:49.205 ] 00:17:49.205 }' 00:17:49.205 21:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.205 21:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.464 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:49.464 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:49.464 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:49.464 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:49.464 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:49.464 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:49.464 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:49.464 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.464 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.464 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.464 [2024-09-29 21:49:08.370217] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.464 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.464 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:49.464 "name": "raid_bdev1", 00:17:49.464 
"aliases": [ 00:17:49.464 "7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584" 00:17:49.464 ], 00:17:49.464 "product_name": "Raid Volume", 00:17:49.464 "block_size": 4096, 00:17:49.464 "num_blocks": 7936, 00:17:49.464 "uuid": "7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584", 00:17:49.464 "md_size": 32, 00:17:49.464 "md_interleave": false, 00:17:49.464 "dif_type": 0, 00:17:49.464 "assigned_rate_limits": { 00:17:49.464 "rw_ios_per_sec": 0, 00:17:49.464 "rw_mbytes_per_sec": 0, 00:17:49.464 "r_mbytes_per_sec": 0, 00:17:49.464 "w_mbytes_per_sec": 0 00:17:49.464 }, 00:17:49.464 "claimed": false, 00:17:49.464 "zoned": false, 00:17:49.464 "supported_io_types": { 00:17:49.464 "read": true, 00:17:49.464 "write": true, 00:17:49.464 "unmap": false, 00:17:49.464 "flush": false, 00:17:49.464 "reset": true, 00:17:49.464 "nvme_admin": false, 00:17:49.464 "nvme_io": false, 00:17:49.464 "nvme_io_md": false, 00:17:49.465 "write_zeroes": true, 00:17:49.465 "zcopy": false, 00:17:49.465 "get_zone_info": false, 00:17:49.465 "zone_management": false, 00:17:49.465 "zone_append": false, 00:17:49.465 "compare": false, 00:17:49.465 "compare_and_write": false, 00:17:49.465 "abort": false, 00:17:49.465 "seek_hole": false, 00:17:49.465 "seek_data": false, 00:17:49.465 "copy": false, 00:17:49.465 "nvme_iov_md": false 00:17:49.465 }, 00:17:49.465 "memory_domains": [ 00:17:49.465 { 00:17:49.465 "dma_device_id": "system", 00:17:49.465 "dma_device_type": 1 00:17:49.465 }, 00:17:49.465 { 00:17:49.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.465 "dma_device_type": 2 00:17:49.465 }, 00:17:49.465 { 00:17:49.465 "dma_device_id": "system", 00:17:49.465 "dma_device_type": 1 00:17:49.465 }, 00:17:49.465 { 00:17:49.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.465 "dma_device_type": 2 00:17:49.465 } 00:17:49.465 ], 00:17:49.465 "driver_specific": { 00:17:49.465 "raid": { 00:17:49.465 "uuid": "7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584", 00:17:49.465 "strip_size_kb": 0, 00:17:49.465 "state": "online", 00:17:49.465 
"raid_level": "raid1", 00:17:49.465 "superblock": true, 00:17:49.465 "num_base_bdevs": 2, 00:17:49.465 "num_base_bdevs_discovered": 2, 00:17:49.465 "num_base_bdevs_operational": 2, 00:17:49.465 "base_bdevs_list": [ 00:17:49.465 { 00:17:49.465 "name": "pt1", 00:17:49.465 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:49.465 "is_configured": true, 00:17:49.465 "data_offset": 256, 00:17:49.465 "data_size": 7936 00:17:49.465 }, 00:17:49.465 { 00:17:49.465 "name": "pt2", 00:17:49.465 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.465 "is_configured": true, 00:17:49.465 "data_offset": 256, 00:17:49.465 "data_size": 7936 00:17:49.465 } 00:17:49.465 ] 00:17:49.465 } 00:17:49.465 } 00:17:49.465 }' 00:17:49.465 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:49.725 pt2' 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.725 21:49:08 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.725 [2024-09-29 21:49:08.593811] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584 '!=' 7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584 ']' 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.725 [2024-09-29 21:49:08.625593] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.725 
21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.725 "name": "raid_bdev1", 00:17:49.725 "uuid": "7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584", 00:17:49.725 "strip_size_kb": 0, 00:17:49.725 "state": "online", 00:17:49.725 "raid_level": "raid1", 00:17:49.725 "superblock": true, 00:17:49.725 "num_base_bdevs": 2, 00:17:49.725 "num_base_bdevs_discovered": 1, 00:17:49.725 "num_base_bdevs_operational": 1, 00:17:49.725 "base_bdevs_list": [ 00:17:49.725 { 00:17:49.725 "name": null, 00:17:49.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.725 "is_configured": false, 00:17:49.725 "data_offset": 0, 00:17:49.725 "data_size": 7936 00:17:49.725 }, 00:17:49.725 { 00:17:49.725 "name": "pt2", 00:17:49.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.725 "is_configured": true, 00:17:49.725 "data_offset": 256, 00:17:49.725 "data_size": 7936 00:17:49.725 } 
00:17:49.725 ] 00:17:49.725 }' 00:17:49.725 21:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.726 21:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.295 [2024-09-29 21:49:09.044861] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.295 [2024-09-29 21:49:09.044887] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.295 [2024-09-29 21:49:09.044946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.295 [2024-09-29 21:49:09.044987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.295 [2024-09-29 21:49:09.044997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.295 21:49:09 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.295 [2024-09-29 21:49:09.120724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:50.295 [2024-09-29 
21:49:09.120776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.295 [2024-09-29 21:49:09.120807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:50.295 [2024-09-29 21:49:09.120817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.295 [2024-09-29 21:49:09.122657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.295 [2024-09-29 21:49:09.122693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:50.295 [2024-09-29 21:49:09.122734] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:50.295 [2024-09-29 21:49:09.122781] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.295 [2024-09-29 21:49:09.122869] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:50.295 [2024-09-29 21:49:09.122880] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:50.295 [2024-09-29 21:49:09.122946] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:50.295 [2024-09-29 21:49:09.123055] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:50.295 [2024-09-29 21:49:09.123069] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:50.295 [2024-09-29 21:49:09.123156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.295 pt2 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.295 "name": "raid_bdev1", 00:17:50.295 "uuid": "7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584", 00:17:50.295 "strip_size_kb": 0, 00:17:50.295 "state": "online", 00:17:50.295 "raid_level": "raid1", 00:17:50.295 "superblock": true, 00:17:50.295 "num_base_bdevs": 2, 00:17:50.295 
"num_base_bdevs_discovered": 1, 00:17:50.295 "num_base_bdevs_operational": 1, 00:17:50.295 "base_bdevs_list": [ 00:17:50.295 { 00:17:50.295 "name": null, 00:17:50.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.295 "is_configured": false, 00:17:50.295 "data_offset": 256, 00:17:50.295 "data_size": 7936 00:17:50.295 }, 00:17:50.295 { 00:17:50.295 "name": "pt2", 00:17:50.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.295 "is_configured": true, 00:17:50.295 "data_offset": 256, 00:17:50.295 "data_size": 7936 00:17:50.295 } 00:17:50.295 ] 00:17:50.295 }' 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.295 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.862 [2024-09-29 21:49:09.587929] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.862 [2024-09-29 21:49:09.587954] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.862 [2024-09-29 21:49:09.587996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.862 [2024-09-29 21:49:09.588044] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.862 [2024-09-29 21:49:09.588052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.862 21:49:09 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.862 [2024-09-29 21:49:09.647862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:50.862 [2024-09-29 21:49:09.647904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.862 [2024-09-29 21:49:09.647919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:50.862 [2024-09-29 21:49:09.647927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.862 [2024-09-29 21:49:09.649755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.862 [2024-09-29 21:49:09.649787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:17:50.862 [2024-09-29 21:49:09.649828] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:50.862 [2024-09-29 21:49:09.649869] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:50.862 [2024-09-29 21:49:09.649973] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:50.862 [2024-09-29 21:49:09.649982] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.862 [2024-09-29 21:49:09.649998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:50.862 [2024-09-29 21:49:09.650070] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.862 [2024-09-29 21:49:09.650130] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:50.862 [2024-09-29 21:49:09.650137] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:50.862 [2024-09-29 21:49:09.650197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:50.862 [2024-09-29 21:49:09.650303] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:50.862 [2024-09-29 21:49:09.650312] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:50.862 [2024-09-29 21:49:09.650401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.862 pt1 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.862 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.862 "name": "raid_bdev1", 00:17:50.862 "uuid": "7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584", 00:17:50.862 "strip_size_kb": 0, 00:17:50.862 "state": "online", 00:17:50.862 "raid_level": "raid1", 
00:17:50.862 "superblock": true, 00:17:50.862 "num_base_bdevs": 2, 00:17:50.862 "num_base_bdevs_discovered": 1, 00:17:50.862 "num_base_bdevs_operational": 1, 00:17:50.862 "base_bdevs_list": [ 00:17:50.862 { 00:17:50.862 "name": null, 00:17:50.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.862 "is_configured": false, 00:17:50.862 "data_offset": 256, 00:17:50.862 "data_size": 7936 00:17:50.862 }, 00:17:50.862 { 00:17:50.862 "name": "pt2", 00:17:50.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.863 "is_configured": true, 00:17:50.863 "data_offset": 256, 00:17:50.863 "data_size": 7936 00:17:50.863 } 00:17:50.863 ] 00:17:50.863 }' 00:17:50.863 21:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.863 21:49:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.122 21:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:51.122 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.122 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.122 21:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:51.122 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.382 21:49:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:51.382 [2024-09-29 21:49:10.131229] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584 '!=' 7e3a3f4c-dbb2-4f4c-adf5-be88ad53a584 ']' 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87489 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87489 ']' 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 87489 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87489 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:51.382 killing process with pid 87489 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87489' 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 87489 00:17:51.382 [2024-09-29 21:49:10.215862] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.382 [2024-09-29 21:49:10.215936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:17:51.382 [2024-09-29 21:49:10.215971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.382 [2024-09-29 21:49:10.215983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:51.382 21:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 87489 00:17:51.642 [2024-09-29 21:49:10.415422] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:53.022 21:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:53.022 00:17:53.022 real 0m6.127s 00:17:53.022 user 0m9.145s 00:17:53.022 sys 0m1.183s 00:17:53.022 21:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:53.022 21:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.022 ************************************ 00:17:53.022 END TEST raid_superblock_test_md_separate 00:17:53.022 ************************************ 00:17:53.022 21:49:11 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:53.022 21:49:11 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:53.022 21:49:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:53.022 21:49:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:53.022 21:49:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:53.022 ************************************ 00:17:53.022 START TEST raid_rebuild_test_sb_md_separate 00:17:53.022 ************************************ 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87812 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87812 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87812 ']' 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:53.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:53.022 21:49:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.022 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:53.022 Zero copy mechanism will not be used. 00:17:53.022 [2024-09-29 21:49:11.772444] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:53.022 [2024-09-29 21:49:11.772545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87812 ] 00:17:53.022 [2024-09-29 21:49:11.936880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.282 [2024-09-29 21:49:12.132124] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.541 [2024-09-29 21:49:12.309765] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.541 [2024-09-29 21:49:12.309820] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.800 BaseBdev1_malloc 
00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.800 [2024-09-29 21:49:12.633741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:53.800 [2024-09-29 21:49:12.633798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.800 [2024-09-29 21:49:12.633820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:53.800 [2024-09-29 21:49:12.633830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.800 [2024-09-29 21:49:12.635539] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.800 [2024-09-29 21:49:12.635577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:53.800 BaseBdev1 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.800 BaseBdev2_malloc 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:53.800 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.801 [2024-09-29 21:49:12.697774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:53.801 [2024-09-29 21:49:12.697826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.801 [2024-09-29 21:49:12.697844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:53.801 [2024-09-29 21:49:12.697854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.801 [2024-09-29 21:49:12.699513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.801 [2024-09-29 21:49:12.699550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:53.801 BaseBdev2 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.801 spare_malloc 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.801 spare_delay 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:53.801 [2024-09-29 21:49:12.764336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:53.801 [2024-09-29 21:49:12.764388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.801 [2024-09-29 21:49:12.764406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:53.801 [2024-09-29 21:49:12.764415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.801 [2024-09-29 21:49:12.766098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.801 [2024-09-29 21:49:12.766140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:53.801 spare 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.801 [2024-09-29 21:49:12.776370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.801 [2024-09-29 21:49:12.777987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:53.801 [2024-09-29 21:49:12.778154] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:53.801 [2024-09-29 21:49:12.778169] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:53.801 [2024-09-29 21:49:12.778229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:53.801 [2024-09-29 21:49:12.778341] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:53.801 [2024-09-29 21:49:12.778356] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:53.801 [2024-09-29 21:49:12.778444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:53.801 21:49:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.801 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.061 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.061 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.061 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.061 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.061 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.061 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.061 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.061 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.061 "name": "raid_bdev1", 00:17:54.061 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:17:54.061 "strip_size_kb": 0, 00:17:54.061 "state": "online", 00:17:54.061 "raid_level": "raid1", 00:17:54.061 "superblock": true, 00:17:54.061 "num_base_bdevs": 2, 00:17:54.061 "num_base_bdevs_discovered": 2, 00:17:54.061 "num_base_bdevs_operational": 2, 00:17:54.061 "base_bdevs_list": [ 00:17:54.061 { 00:17:54.061 "name": "BaseBdev1", 00:17:54.061 "uuid": "1a1e7b19-c369-56d7-aba8-027a9256e232", 00:17:54.061 "is_configured": true, 00:17:54.061 "data_offset": 256, 00:17:54.061 "data_size": 7936 00:17:54.061 }, 00:17:54.061 { 00:17:54.061 "name": "BaseBdev2", 00:17:54.061 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:17:54.061 "is_configured": true, 00:17:54.061 "data_offset": 256, 00:17:54.061 "data_size": 7936 
00:17:54.061 } 00:17:54.061 ] 00:17:54.061 }' 00:17:54.061 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.061 21:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.320 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:54.320 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.320 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:54.320 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.320 [2024-09-29 21:49:13.247854] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.320 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.320 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:54.320 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.320 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:54.320 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.320 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:54.579 [2024-09-29 21:49:13.511191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:54.579 /dev/nbd0 00:17:54.579 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:54.846 1+0 records in 00:17:54.846 1+0 records out 00:17:54.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365834 s, 11.2 MB/s 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:54.846 21:49:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:54.846 21:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:55.441 7936+0 records in 00:17:55.441 7936+0 records out 00:17:55.441 32505856 bytes (33 MB, 31 MiB) copied, 0.548965 s, 59.2 MB/s 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:55.441 [2024-09-29 21:49:14.348991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:55.441 21:49:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:55.441 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.442 [2024-09-29 21:49:14.361752] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.442 "name": "raid_bdev1", 00:17:55.442 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:17:55.442 "strip_size_kb": 0, 00:17:55.442 "state": "online", 00:17:55.442 "raid_level": "raid1", 00:17:55.442 "superblock": true, 00:17:55.442 "num_base_bdevs": 2, 00:17:55.442 "num_base_bdevs_discovered": 1, 00:17:55.442 "num_base_bdevs_operational": 1, 00:17:55.442 "base_bdevs_list": [ 00:17:55.442 { 00:17:55.442 "name": null, 00:17:55.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.442 "is_configured": false, 00:17:55.442 "data_offset": 0, 00:17:55.442 "data_size": 7936 00:17:55.442 }, 00:17:55.442 { 00:17:55.442 "name": "BaseBdev2", 00:17:55.442 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:17:55.442 "is_configured": true, 00:17:55.442 "data_offset": 256, 00:17:55.442 "data_size": 7936 00:17:55.442 } 00:17:55.442 ] 00:17:55.442 }' 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.442 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:56.020 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:56.020 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.020 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.020 [2024-09-29 21:49:14.856925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:56.020 [2024-09-29 21:49:14.871735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:56.020 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.020 21:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:56.020 [2024-09-29 21:49:14.873495] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:56.959 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.959 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.959 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.959 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.959 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.959 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.959 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.959 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.959 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.959 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.959 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.959 "name": "raid_bdev1", 00:17:56.959 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:17:56.959 "strip_size_kb": 0, 00:17:56.959 "state": "online", 00:17:56.959 "raid_level": "raid1", 00:17:56.959 "superblock": true, 00:17:56.959 "num_base_bdevs": 2, 00:17:56.959 "num_base_bdevs_discovered": 2, 00:17:56.959 "num_base_bdevs_operational": 2, 00:17:56.959 "process": { 00:17:56.959 "type": "rebuild", 00:17:56.959 "target": "spare", 00:17:56.959 "progress": { 00:17:56.959 "blocks": 2560, 00:17:56.959 "percent": 32 00:17:56.959 } 00:17:56.959 }, 00:17:56.959 "base_bdevs_list": [ 00:17:56.959 { 00:17:56.959 "name": "spare", 00:17:56.959 "uuid": "7791b74c-5c0a-536f-a411-0a4c177028ad", 00:17:56.959 "is_configured": true, 00:17:56.959 "data_offset": 256, 00:17:56.959 "data_size": 7936 00:17:56.959 }, 00:17:56.959 { 00:17:56.959 "name": "BaseBdev2", 00:17:56.959 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:17:56.959 "is_configured": true, 00:17:56.959 "data_offset": 256, 00:17:56.959 "data_size": 7936 00:17:56.959 } 00:17:56.959 ] 00:17:56.959 }' 00:17:56.959 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.218 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:57.218 21:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.218 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:57.218 21:49:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:57.218 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.218 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.218 [2024-09-29 21:49:16.013198] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.218 [2024-09-29 21:49:16.078119] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:57.218 [2024-09-29 21:49:16.078172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.218 [2024-09-29 21:49:16.078185] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:57.218 [2024-09-29 21:49:16.078199] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:57.218 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.218 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:57.218 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.218 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.218 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.218 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.218 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:57.218 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.218 21:49:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.219 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.219 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.219 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.219 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.219 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.219 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.219 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.219 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.219 "name": "raid_bdev1", 00:17:57.219 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:17:57.219 "strip_size_kb": 0, 00:17:57.219 "state": "online", 00:17:57.219 "raid_level": "raid1", 00:17:57.219 "superblock": true, 00:17:57.219 "num_base_bdevs": 2, 00:17:57.219 "num_base_bdevs_discovered": 1, 00:17:57.219 "num_base_bdevs_operational": 1, 00:17:57.219 "base_bdevs_list": [ 00:17:57.219 { 00:17:57.219 "name": null, 00:17:57.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.219 "is_configured": false, 00:17:57.219 "data_offset": 0, 00:17:57.219 "data_size": 7936 00:17:57.219 }, 00:17:57.219 { 00:17:57.219 "name": "BaseBdev2", 00:17:57.219 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:17:57.219 "is_configured": true, 00:17:57.219 "data_offset": 256, 00:17:57.219 "data_size": 7936 00:17:57.219 } 00:17:57.219 ] 00:17:57.219 }' 00:17:57.219 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.219 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:57.788 "name": "raid_bdev1", 00:17:57.788 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:17:57.788 "strip_size_kb": 0, 00:17:57.788 "state": "online", 00:17:57.788 "raid_level": "raid1", 00:17:57.788 "superblock": true, 00:17:57.788 "num_base_bdevs": 2, 00:17:57.788 "num_base_bdevs_discovered": 1, 00:17:57.788 "num_base_bdevs_operational": 1, 00:17:57.788 "base_bdevs_list": [ 00:17:57.788 { 00:17:57.788 "name": null, 00:17:57.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.788 
"is_configured": false, 00:17:57.788 "data_offset": 0, 00:17:57.788 "data_size": 7936 00:17:57.788 }, 00:17:57.788 { 00:17:57.788 "name": "BaseBdev2", 00:17:57.788 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:17:57.788 "is_configured": true, 00:17:57.788 "data_offset": 256, 00:17:57.788 "data_size": 7936 00:17:57.788 } 00:17:57.788 ] 00:17:57.788 }' 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.788 [2024-09-29 21:49:16.679713] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:57.788 [2024-09-29 21:49:16.693214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.788 [2024-09-29 21:49:16.694765] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:57.788 21:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:58.726 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.726 21:49:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.726 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.726 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.726 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.726 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.726 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.726 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.726 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.986 "name": "raid_bdev1", 00:17:58.986 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:17:58.986 "strip_size_kb": 0, 00:17:58.986 "state": "online", 00:17:58.986 "raid_level": "raid1", 00:17:58.986 "superblock": true, 00:17:58.986 "num_base_bdevs": 2, 00:17:58.986 "num_base_bdevs_discovered": 2, 00:17:58.986 "num_base_bdevs_operational": 2, 00:17:58.986 "process": { 00:17:58.986 "type": "rebuild", 00:17:58.986 "target": "spare", 00:17:58.986 "progress": { 00:17:58.986 "blocks": 2560, 00:17:58.986 "percent": 32 00:17:58.986 } 00:17:58.986 }, 00:17:58.986 "base_bdevs_list": [ 00:17:58.986 { 00:17:58.986 "name": "spare", 00:17:58.986 "uuid": "7791b74c-5c0a-536f-a411-0a4c177028ad", 00:17:58.986 "is_configured": true, 00:17:58.986 "data_offset": 256, 00:17:58.986 "data_size": 7936 00:17:58.986 }, 
00:17:58.986 { 00:17:58.986 "name": "BaseBdev2", 00:17:58.986 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:17:58.986 "is_configured": true, 00:17:58.986 "data_offset": 256, 00:17:58.986 "data_size": 7936 00:17:58.986 } 00:17:58.986 ] 00:17:58.986 }' 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:58.986 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=713 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.986 21:49:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.986 "name": "raid_bdev1", 00:17:58.986 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:17:58.986 "strip_size_kb": 0, 00:17:58.986 "state": "online", 00:17:58.986 "raid_level": "raid1", 00:17:58.986 "superblock": true, 00:17:58.986 "num_base_bdevs": 2, 00:17:58.986 "num_base_bdevs_discovered": 2, 00:17:58.986 "num_base_bdevs_operational": 2, 00:17:58.986 "process": { 00:17:58.986 "type": "rebuild", 00:17:58.986 "target": "spare", 00:17:58.986 "progress": { 00:17:58.986 "blocks": 2816, 00:17:58.986 "percent": 35 00:17:58.986 } 00:17:58.986 }, 00:17:58.986 "base_bdevs_list": [ 00:17:58.986 { 00:17:58.986 "name": "spare", 00:17:58.986 "uuid": "7791b74c-5c0a-536f-a411-0a4c177028ad", 00:17:58.986 "is_configured": true, 00:17:58.986 "data_offset": 256, 00:17:58.986 "data_size": 7936 00:17:58.986 }, 00:17:58.986 { 00:17:58.986 "name": "BaseBdev2", 00:17:58.986 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:17:58.986 
"is_configured": true, 00:17:58.986 "data_offset": 256, 00:17:58.986 "data_size": 7936 00:17:58.986 } 00:17:58.986 ] 00:17:58.986 }' 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.986 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.245 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.245 21:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:00.185 21:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.185 21:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.185 21:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.185 21:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.185 21:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.185 21:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.185 21:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.185 21:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.185 21:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.185 21:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.185 21:49:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.185 21:49:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.185 "name": "raid_bdev1", 00:18:00.185 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:00.185 "strip_size_kb": 0, 00:18:00.185 "state": "online", 00:18:00.185 "raid_level": "raid1", 00:18:00.185 "superblock": true, 00:18:00.185 "num_base_bdevs": 2, 00:18:00.185 "num_base_bdevs_discovered": 2, 00:18:00.185 "num_base_bdevs_operational": 2, 00:18:00.185 "process": { 00:18:00.185 "type": "rebuild", 00:18:00.185 "target": "spare", 00:18:00.185 "progress": { 00:18:00.185 "blocks": 5632, 00:18:00.185 "percent": 70 00:18:00.185 } 00:18:00.185 }, 00:18:00.185 "base_bdevs_list": [ 00:18:00.185 { 00:18:00.185 "name": "spare", 00:18:00.185 "uuid": "7791b74c-5c0a-536f-a411-0a4c177028ad", 00:18:00.185 "is_configured": true, 00:18:00.185 "data_offset": 256, 00:18:00.185 "data_size": 7936 00:18:00.185 }, 00:18:00.185 { 00:18:00.185 "name": "BaseBdev2", 00:18:00.185 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:00.185 "is_configured": true, 00:18:00.185 "data_offset": 256, 00:18:00.185 "data_size": 7936 00:18:00.185 } 00:18:00.185 ] 00:18:00.185 }' 00:18:00.185 21:49:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.185 21:49:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.185 21:49:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.186 21:49:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.186 21:49:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:01.126 [2024-09-29 21:49:19.806169] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:01.126 [2024-09-29 21:49:19.806230] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:01.126 [2024-09-29 21:49:19.806319] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.386 "name": "raid_bdev1", 00:18:01.386 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:01.386 "strip_size_kb": 0, 00:18:01.386 "state": "online", 00:18:01.386 "raid_level": "raid1", 00:18:01.386 "superblock": true, 00:18:01.386 
"num_base_bdevs": 2, 00:18:01.386 "num_base_bdevs_discovered": 2, 00:18:01.386 "num_base_bdevs_operational": 2, 00:18:01.386 "base_bdevs_list": [ 00:18:01.386 { 00:18:01.386 "name": "spare", 00:18:01.386 "uuid": "7791b74c-5c0a-536f-a411-0a4c177028ad", 00:18:01.386 "is_configured": true, 00:18:01.386 "data_offset": 256, 00:18:01.386 "data_size": 7936 00:18:01.386 }, 00:18:01.386 { 00:18:01.386 "name": "BaseBdev2", 00:18:01.386 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:01.386 "is_configured": true, 00:18:01.386 "data_offset": 256, 00:18:01.386 "data_size": 7936 00:18:01.386 } 00:18:01.386 ] 00:18:01.386 }' 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.386 21:49:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.386 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.386 "name": "raid_bdev1", 00:18:01.386 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:01.386 "strip_size_kb": 0, 00:18:01.386 "state": "online", 00:18:01.386 "raid_level": "raid1", 00:18:01.386 "superblock": true, 00:18:01.386 "num_base_bdevs": 2, 00:18:01.386 "num_base_bdevs_discovered": 2, 00:18:01.386 "num_base_bdevs_operational": 2, 00:18:01.386 "base_bdevs_list": [ 00:18:01.386 { 00:18:01.386 "name": "spare", 00:18:01.386 "uuid": "7791b74c-5c0a-536f-a411-0a4c177028ad", 00:18:01.386 "is_configured": true, 00:18:01.386 "data_offset": 256, 00:18:01.387 "data_size": 7936 00:18:01.387 }, 00:18:01.387 { 00:18:01.387 "name": "BaseBdev2", 00:18:01.387 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:01.387 "is_configured": true, 00:18:01.387 "data_offset": 256, 00:18:01.387 "data_size": 7936 00:18:01.387 } 00:18:01.387 ] 00:18:01.387 }' 00:18:01.387 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.387 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:01.387 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.647 "name": "raid_bdev1", 00:18:01.647 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:01.647 
"strip_size_kb": 0, 00:18:01.647 "state": "online", 00:18:01.647 "raid_level": "raid1", 00:18:01.647 "superblock": true, 00:18:01.647 "num_base_bdevs": 2, 00:18:01.647 "num_base_bdevs_discovered": 2, 00:18:01.647 "num_base_bdevs_operational": 2, 00:18:01.647 "base_bdevs_list": [ 00:18:01.647 { 00:18:01.647 "name": "spare", 00:18:01.647 "uuid": "7791b74c-5c0a-536f-a411-0a4c177028ad", 00:18:01.647 "is_configured": true, 00:18:01.647 "data_offset": 256, 00:18:01.647 "data_size": 7936 00:18:01.647 }, 00:18:01.647 { 00:18:01.647 "name": "BaseBdev2", 00:18:01.647 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:01.647 "is_configured": true, 00:18:01.647 "data_offset": 256, 00:18:01.647 "data_size": 7936 00:18:01.647 } 00:18:01.647 ] 00:18:01.647 }' 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.647 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.908 [2024-09-29 21:49:20.823035] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.908 [2024-09-29 21:49:20.823086] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.908 [2024-09-29 21:49:20.823158] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.908 [2024-09-29 21:49:20.823221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.908 [2024-09-29 21:49:20.823234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:01.908 21:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:02.168 /dev/nbd0 00:18:02.168 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:02.168 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:02.168 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:02.168 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:02.168 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:02.168 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:02.169 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:02.169 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:02.169 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:02.169 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:02.169 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.169 1+0 records in 00:18:02.169 1+0 records out 00:18:02.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398438 s, 10.3 MB/s 00:18:02.169 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.169 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:02.169 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.169 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:02.169 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:02.169 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:02.169 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:02.169 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:02.429 /dev/nbd1 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.429 1+0 records in 00:18:02.429 1+0 records out 00:18:02.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407749 s, 10.0 MB/s 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:02.429 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:02.689 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:02.689 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.689 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:02.690 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:02.690 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:02.690 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.690 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:02.950 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:02.950 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:02.950 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:02.950 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:02.950 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:02.950 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:02.950 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:02.950 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:02.950 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.950 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.210 [2024-09-29 21:49:21.985161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:03.210 [2024-09-29 21:49:21.985214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.210 [2024-09-29 21:49:21.985249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:03.210 [2024-09-29 21:49:21.985257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:03.210 [2024-09-29 21:49:21.987085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.210 [2024-09-29 21:49:21.987115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:03.210 [2024-09-29 21:49:21.987166] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:03.210 [2024-09-29 21:49:21.987232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.210 [2024-09-29 21:49:21.987351] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:03.210 spare 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.210 21:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.210 [2024-09-29 21:49:22.087237] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:03.210 [2024-09-29 21:49:22.087264] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:03.210 [2024-09-29 21:49:22.087342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:03.210 [2024-09-29 21:49:22.087464] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:03.210 [2024-09-29 21:49:22.087473] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:03.210 [2024-09-29 21:49:22.087576] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.210 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:03.210 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.211 "name": "raid_bdev1", 00:18:03.211 "uuid": 
"411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:03.211 "strip_size_kb": 0, 00:18:03.211 "state": "online", 00:18:03.211 "raid_level": "raid1", 00:18:03.211 "superblock": true, 00:18:03.211 "num_base_bdevs": 2, 00:18:03.211 "num_base_bdevs_discovered": 2, 00:18:03.211 "num_base_bdevs_operational": 2, 00:18:03.211 "base_bdevs_list": [ 00:18:03.211 { 00:18:03.211 "name": "spare", 00:18:03.211 "uuid": "7791b74c-5c0a-536f-a411-0a4c177028ad", 00:18:03.211 "is_configured": true, 00:18:03.211 "data_offset": 256, 00:18:03.211 "data_size": 7936 00:18:03.211 }, 00:18:03.211 { 00:18:03.211 "name": "BaseBdev2", 00:18:03.211 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:03.211 "is_configured": true, 00:18:03.211 "data_offset": 256, 00:18:03.211 "data_size": 7936 00:18:03.211 } 00:18:03.211 ] 00:18:03.211 }' 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.211 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.781 "name": "raid_bdev1", 00:18:03.781 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:03.781 "strip_size_kb": 0, 00:18:03.781 "state": "online", 00:18:03.781 "raid_level": "raid1", 00:18:03.781 "superblock": true, 00:18:03.781 "num_base_bdevs": 2, 00:18:03.781 "num_base_bdevs_discovered": 2, 00:18:03.781 "num_base_bdevs_operational": 2, 00:18:03.781 "base_bdevs_list": [ 00:18:03.781 { 00:18:03.781 "name": "spare", 00:18:03.781 "uuid": "7791b74c-5c0a-536f-a411-0a4c177028ad", 00:18:03.781 "is_configured": true, 00:18:03.781 "data_offset": 256, 00:18:03.781 "data_size": 7936 00:18:03.781 }, 00:18:03.781 { 00:18:03.781 "name": "BaseBdev2", 00:18:03.781 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:03.781 "is_configured": true, 00:18:03.781 "data_offset": 256, 00:18:03.781 "data_size": 7936 00:18:03.781 } 00:18:03.781 ] 00:18:03.781 }' 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.781 [2024-09-29 21:49:22.688086] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.781 21:49:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.781 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.781 "name": "raid_bdev1", 00:18:03.781 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:03.781 "strip_size_kb": 0, 00:18:03.781 "state": "online", 00:18:03.782 "raid_level": "raid1", 00:18:03.782 "superblock": true, 00:18:03.782 "num_base_bdevs": 2, 00:18:03.782 "num_base_bdevs_discovered": 1, 00:18:03.782 "num_base_bdevs_operational": 1, 00:18:03.782 "base_bdevs_list": [ 00:18:03.782 { 00:18:03.782 "name": null, 00:18:03.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.782 "is_configured": false, 00:18:03.782 "data_offset": 0, 00:18:03.782 "data_size": 7936 00:18:03.782 }, 00:18:03.782 { 00:18:03.782 "name": "BaseBdev2", 00:18:03.782 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:03.782 "is_configured": true, 00:18:03.782 "data_offset": 256, 00:18:03.782 "data_size": 7936 00:18:03.782 } 00:18:03.782 ] 00:18:03.782 }' 00:18:03.782 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.782 21:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.352 21:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:04.352 21:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.352 21:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.352 [2024-09-29 21:49:23.151286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.352 [2024-09-29 21:49:23.151442] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:04.352 [2024-09-29 21:49:23.151465] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:04.352 [2024-09-29 21:49:23.151497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.352 [2024-09-29 21:49:23.164873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:04.352 21:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.352 21:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:04.352 [2024-09-29 21:49:23.166605] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:05.292 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.292 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.292 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.292 21:49:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.292 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.292 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.292 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.292 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.292 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.292 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.292 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.292 "name": "raid_bdev1", 00:18:05.292 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:05.292 "strip_size_kb": 0, 00:18:05.292 "state": "online", 00:18:05.292 "raid_level": "raid1", 00:18:05.292 "superblock": true, 00:18:05.292 "num_base_bdevs": 2, 00:18:05.292 "num_base_bdevs_discovered": 2, 00:18:05.292 "num_base_bdevs_operational": 2, 00:18:05.292 "process": { 00:18:05.292 "type": "rebuild", 00:18:05.292 "target": "spare", 00:18:05.292 "progress": { 00:18:05.292 "blocks": 2560, 00:18:05.292 "percent": 32 00:18:05.292 } 00:18:05.292 }, 00:18:05.292 "base_bdevs_list": [ 00:18:05.292 { 00:18:05.292 "name": "spare", 00:18:05.292 "uuid": "7791b74c-5c0a-536f-a411-0a4c177028ad", 00:18:05.292 "is_configured": true, 00:18:05.292 "data_offset": 256, 00:18:05.292 "data_size": 7936 00:18:05.292 }, 00:18:05.292 { 00:18:05.292 "name": "BaseBdev2", 00:18:05.292 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:05.292 "is_configured": true, 00:18:05.292 "data_offset": 256, 00:18:05.292 "data_size": 7936 00:18:05.292 } 00:18:05.292 ] 00:18:05.292 
}' 00:18:05.292 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.292 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.292 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.553 [2024-09-29 21:49:24.311363] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.553 [2024-09-29 21:49:24.371176] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:05.553 [2024-09-29 21:49:24.371240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.553 [2024-09-29 21:49:24.371253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.553 [2024-09-29 21:49:24.371262] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.553 "name": "raid_bdev1", 00:18:05.553 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:05.553 "strip_size_kb": 0, 00:18:05.553 "state": "online", 00:18:05.553 "raid_level": "raid1", 00:18:05.553 "superblock": true, 00:18:05.553 "num_base_bdevs": 2, 00:18:05.553 "num_base_bdevs_discovered": 1, 00:18:05.553 "num_base_bdevs_operational": 1, 00:18:05.553 "base_bdevs_list": [ 00:18:05.553 { 00:18:05.553 "name": 
null, 00:18:05.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.553 "is_configured": false, 00:18:05.553 "data_offset": 0, 00:18:05.553 "data_size": 7936 00:18:05.553 }, 00:18:05.553 { 00:18:05.553 "name": "BaseBdev2", 00:18:05.553 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:05.553 "is_configured": true, 00:18:05.553 "data_offset": 256, 00:18:05.553 "data_size": 7936 00:18:05.553 } 00:18:05.553 ] 00:18:05.553 }' 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.553 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.124 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.124 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.124 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.124 [2024-09-29 21:49:24.876488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.124 [2024-09-29 21:49:24.876566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.124 [2024-09-29 21:49:24.876590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:06.124 [2024-09-29 21:49:24.876600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.124 [2024-09-29 21:49:24.876838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.124 [2024-09-29 21:49:24.876863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.124 [2024-09-29 21:49:24.876915] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:06.124 [2024-09-29 21:49:24.876928] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:06.124 [2024-09-29 21:49:24.876942] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:06.124 [2024-09-29 21:49:24.876961] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.124 [2024-09-29 21:49:24.889769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:06.124 spare 00:18:06.124 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.124 21:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:06.124 [2024-09-29 21:49:24.891498] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.064 21:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.064 21:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.064 21:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.064 21:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.064 21:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.064 21:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.064 21:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.064 21:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.064 21:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.064 21:49:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.064 21:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.064 "name": "raid_bdev1", 00:18:07.064 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:07.064 "strip_size_kb": 0, 00:18:07.064 "state": "online", 00:18:07.064 "raid_level": "raid1", 00:18:07.064 "superblock": true, 00:18:07.064 "num_base_bdevs": 2, 00:18:07.064 "num_base_bdevs_discovered": 2, 00:18:07.064 "num_base_bdevs_operational": 2, 00:18:07.064 "process": { 00:18:07.064 "type": "rebuild", 00:18:07.064 "target": "spare", 00:18:07.064 "progress": { 00:18:07.064 "blocks": 2560, 00:18:07.064 "percent": 32 00:18:07.064 } 00:18:07.064 }, 00:18:07.064 "base_bdevs_list": [ 00:18:07.064 { 00:18:07.064 "name": "spare", 00:18:07.064 "uuid": "7791b74c-5c0a-536f-a411-0a4c177028ad", 00:18:07.064 "is_configured": true, 00:18:07.064 "data_offset": 256, 00:18:07.064 "data_size": 7936 00:18:07.064 }, 00:18:07.064 { 00:18:07.064 "name": "BaseBdev2", 00:18:07.064 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:07.064 "is_configured": true, 00:18:07.064 "data_offset": 256, 00:18:07.064 "data_size": 7936 00:18:07.064 } 00:18:07.064 ] 00:18:07.064 }' 00:18:07.064 21:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.064 21:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.064 21:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.064 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.064 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:07.064 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.064 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.064 [2024-09-29 21:49:26.031701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.324 [2024-09-29 21:49:26.096048] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:07.324 [2024-09-29 21:49:26.096102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.324 [2024-09-29 21:49:26.096118] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.324 [2024-09-29 21:49:26.096125] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:07.324 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.324 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.324 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.324 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.325 "name": "raid_bdev1", 00:18:07.325 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:07.325 "strip_size_kb": 0, 00:18:07.325 "state": "online", 00:18:07.325 "raid_level": "raid1", 00:18:07.325 "superblock": true, 00:18:07.325 "num_base_bdevs": 2, 00:18:07.325 "num_base_bdevs_discovered": 1, 00:18:07.325 "num_base_bdevs_operational": 1, 00:18:07.325 "base_bdevs_list": [ 00:18:07.325 { 00:18:07.325 "name": null, 00:18:07.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.325 "is_configured": false, 00:18:07.325 "data_offset": 0, 00:18:07.325 "data_size": 7936 00:18:07.325 }, 00:18:07.325 { 00:18:07.325 "name": "BaseBdev2", 00:18:07.325 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:07.325 "is_configured": true, 00:18:07.325 "data_offset": 256, 00:18:07.325 "data_size": 7936 00:18:07.325 } 00:18:07.325 ] 00:18:07.325 }' 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.325 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.585 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.585 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.585 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.585 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.585 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.585 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.585 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.585 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.585 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.845 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.845 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.845 "name": "raid_bdev1", 00:18:07.845 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:07.845 "strip_size_kb": 0, 00:18:07.845 "state": "online", 00:18:07.845 "raid_level": "raid1", 00:18:07.845 "superblock": true, 00:18:07.845 "num_base_bdevs": 2, 00:18:07.845 "num_base_bdevs_discovered": 1, 00:18:07.845 "num_base_bdevs_operational": 1, 00:18:07.845 "base_bdevs_list": [ 00:18:07.845 { 00:18:07.845 "name": null, 00:18:07.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.845 "is_configured": false, 00:18:07.845 "data_offset": 0, 00:18:07.845 "data_size": 7936 00:18:07.845 }, 00:18:07.845 { 00:18:07.845 "name": "BaseBdev2", 00:18:07.845 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 
00:18:07.845 "is_configured": true, 00:18:07.845 "data_offset": 256, 00:18:07.845 "data_size": 7936 00:18:07.845 } 00:18:07.845 ] 00:18:07.845 }' 00:18:07.845 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.845 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.845 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.845 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.845 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:07.845 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.845 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.845 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.846 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:07.846 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.846 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.846 [2024-09-29 21:49:26.717979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:07.846 [2024-09-29 21:49:26.718042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.846 [2024-09-29 21:49:26.718065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:07.846 [2024-09-29 21:49:26.718074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:07.846 [2024-09-29 21:49:26.718273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.846 [2024-09-29 21:49:26.718290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:07.846 [2024-09-29 21:49:26.718336] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:07.846 [2024-09-29 21:49:26.718348] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:07.846 [2024-09-29 21:49:26.718358] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:07.846 [2024-09-29 21:49:26.718368] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:07.846 BaseBdev1 00:18:07.846 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.846 21:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:08.786 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:08.786 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.787 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.787 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.787 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.787 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:08.787 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.787 21:49:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.787 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.787 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.787 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.787 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.787 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.787 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.787 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.047 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.047 "name": "raid_bdev1", 00:18:09.047 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:09.047 "strip_size_kb": 0, 00:18:09.047 "state": "online", 00:18:09.047 "raid_level": "raid1", 00:18:09.047 "superblock": true, 00:18:09.047 "num_base_bdevs": 2, 00:18:09.047 "num_base_bdevs_discovered": 1, 00:18:09.047 "num_base_bdevs_operational": 1, 00:18:09.047 "base_bdevs_list": [ 00:18:09.047 { 00:18:09.047 "name": null, 00:18:09.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.047 "is_configured": false, 00:18:09.047 "data_offset": 0, 00:18:09.047 "data_size": 7936 00:18:09.047 }, 00:18:09.047 { 00:18:09.047 "name": "BaseBdev2", 00:18:09.047 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:09.047 "is_configured": true, 00:18:09.047 "data_offset": 256, 00:18:09.047 "data_size": 7936 00:18:09.047 } 00:18:09.047 ] 00:18:09.047 }' 00:18:09.047 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.047 21:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.307 "name": "raid_bdev1", 00:18:09.307 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:09.307 "strip_size_kb": 0, 00:18:09.307 "state": "online", 00:18:09.307 "raid_level": "raid1", 00:18:09.307 "superblock": true, 00:18:09.307 "num_base_bdevs": 2, 00:18:09.307 "num_base_bdevs_discovered": 1, 00:18:09.307 "num_base_bdevs_operational": 1, 00:18:09.307 "base_bdevs_list": [ 00:18:09.307 { 00:18:09.307 "name": null, 00:18:09.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.307 
"is_configured": false, 00:18:09.307 "data_offset": 0, 00:18:09.307 "data_size": 7936 00:18:09.307 }, 00:18:09.307 { 00:18:09.307 "name": "BaseBdev2", 00:18:09.307 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:09.307 "is_configured": true, 00:18:09.307 "data_offset": 256, 00:18:09.307 "data_size": 7936 00:18:09.307 } 00:18:09.307 ] 00:18:09.307 }' 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.307 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.308 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:09.308 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:09.308 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:09.308 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:09.568 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:09.568 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:09.568 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:09.568 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:09.568 21:49:28 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.568 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.568 [2024-09-29 21:49:28.299306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.568 [2024-09-29 21:49:28.299458] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:09.568 [2024-09-29 21:49:28.299473] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:09.568 request: 00:18:09.568 { 00:18:09.568 "base_bdev": "BaseBdev1", 00:18:09.568 "raid_bdev": "raid_bdev1", 00:18:09.568 "method": "bdev_raid_add_base_bdev", 00:18:09.568 "req_id": 1 00:18:09.568 } 00:18:09.568 Got JSON-RPC error response 00:18:09.568 response: 00:18:09.568 { 00:18:09.568 "code": -22, 00:18:09.568 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:09.568 } 00:18:09.568 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:09.568 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:09.568 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:09.568 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:09.568 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:09.568 21:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.520 "name": "raid_bdev1", 00:18:10.520 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:10.520 "strip_size_kb": 0, 00:18:10.520 "state": "online", 00:18:10.520 "raid_level": "raid1", 00:18:10.520 "superblock": true, 00:18:10.520 "num_base_bdevs": 2, 00:18:10.520 
"num_base_bdevs_discovered": 1, 00:18:10.520 "num_base_bdevs_operational": 1, 00:18:10.520 "base_bdevs_list": [ 00:18:10.520 { 00:18:10.520 "name": null, 00:18:10.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.520 "is_configured": false, 00:18:10.520 "data_offset": 0, 00:18:10.520 "data_size": 7936 00:18:10.520 }, 00:18:10.520 { 00:18:10.520 "name": "BaseBdev2", 00:18:10.520 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:10.520 "is_configured": true, 00:18:10.520 "data_offset": 256, 00:18:10.520 "data_size": 7936 00:18:10.520 } 00:18:10.520 ] 00:18:10.520 }' 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.520 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.780 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.780 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.780 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.780 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.780 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.780 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.780 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.780 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.780 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.780 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.041 "name": "raid_bdev1", 00:18:11.041 "uuid": "411fd2cb-0753-4403-8a04-e685ce249b8d", 00:18:11.041 "strip_size_kb": 0, 00:18:11.041 "state": "online", 00:18:11.041 "raid_level": "raid1", 00:18:11.041 "superblock": true, 00:18:11.041 "num_base_bdevs": 2, 00:18:11.041 "num_base_bdevs_discovered": 1, 00:18:11.041 "num_base_bdevs_operational": 1, 00:18:11.041 "base_bdevs_list": [ 00:18:11.041 { 00:18:11.041 "name": null, 00:18:11.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.041 "is_configured": false, 00:18:11.041 "data_offset": 0, 00:18:11.041 "data_size": 7936 00:18:11.041 }, 00:18:11.041 { 00:18:11.041 "name": "BaseBdev2", 00:18:11.041 "uuid": "0838b72e-6c9a-550e-911b-2858c8c1fed3", 00:18:11.041 "is_configured": true, 00:18:11.041 "data_offset": 256, 00:18:11.041 "data_size": 7936 00:18:11.041 } 00:18:11.041 ] 00:18:11.041 }' 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87812 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87812 ']' 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87812 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:11.041 21:49:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87812 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:11.041 killing process with pid 87812 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87812' 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87812 00:18:11.041 Received shutdown signal, test time was about 60.000000 seconds 00:18:11.041 00:18:11.041 Latency(us) 00:18:11.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.041 =================================================================================================================== 00:18:11.041 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:11.041 [2024-09-29 21:49:29.906698] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:11.041 [2024-09-29 21:49:29.906819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.041 21:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87812 00:18:11.041 [2024-09-29 21:49:29.906870] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.041 [2024-09-29 21:49:29.906881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:11.301 [2024-09-29 21:49:30.208753] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:12.682 21:49:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:12.682 00:18:12.682 real 0m19.681s 00:18:12.682 user 0m25.652s 00:18:12.682 sys 0m2.669s 00:18:12.682 21:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:12.682 21:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.682 ************************************ 00:18:12.682 END TEST raid_rebuild_test_sb_md_separate 00:18:12.682 ************************************ 00:18:12.682 21:49:31 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:12.682 21:49:31 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:12.682 21:49:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:12.682 21:49:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:12.682 21:49:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:12.682 ************************************ 00:18:12.682 START TEST raid_state_function_test_sb_md_interleaved 00:18:12.682 ************************************ 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88498 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88498' 00:18:12.682 Process raid pid: 88498 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88498 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88498 ']' 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.682 21:49:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.682 [2024-09-29 21:49:31.528127] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:12.682 [2024-09-29 21:49:31.528248] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.942 [2024-09-29 21:49:31.693825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.942 [2024-09-29 21:49:31.880296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.203 [2024-09-29 21:49:32.082470] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.203 [2024-09-29 21:49:32.082507] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.464 [2024-09-29 21:49:32.349108] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:13.464 [2024-09-29 21:49:32.349162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:13.464 [2024-09-29 21:49:32.349171] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:13.464 [2024-09-29 21:49:32.349196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:13.464 21:49:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.464 21:49:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.464 "name": "Existed_Raid", 00:18:13.464 "uuid": "cd199140-3eda-49f0-85b1-ed2d7b7b137c", 00:18:13.464 "strip_size_kb": 0, 00:18:13.464 "state": "configuring", 00:18:13.464 "raid_level": "raid1", 00:18:13.464 "superblock": true, 00:18:13.464 "num_base_bdevs": 2, 00:18:13.464 "num_base_bdevs_discovered": 0, 00:18:13.464 "num_base_bdevs_operational": 2, 00:18:13.464 "base_bdevs_list": [ 00:18:13.464 { 00:18:13.464 "name": "BaseBdev1", 00:18:13.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.464 "is_configured": false, 00:18:13.464 "data_offset": 0, 00:18:13.464 "data_size": 0 00:18:13.464 }, 00:18:13.464 { 00:18:13.464 "name": "BaseBdev2", 00:18:13.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.464 "is_configured": false, 00:18:13.464 "data_offset": 0, 00:18:13.464 "data_size": 0 00:18:13.464 } 00:18:13.464 ] 00:18:13.464 }' 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.464 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.035 [2024-09-29 21:49:32.836182] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:14.035 [2024-09-29 21:49:32.836219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.035 [2024-09-29 21:49:32.844184] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.035 [2024-09-29 21:49:32.844234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.035 [2024-09-29 21:49:32.844259] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.035 [2024-09-29 21:49:32.844270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.035 [2024-09-29 21:49:32.916181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.035 BaseBdev1 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.035 [ 00:18:14.035 { 00:18:14.035 "name": "BaseBdev1", 00:18:14.035 "aliases": [ 00:18:14.035 "21d99d17-77e1-43b7-b9b3-574bd4944d09" 00:18:14.035 ], 00:18:14.035 "product_name": "Malloc disk", 00:18:14.035 "block_size": 4128, 00:18:14.035 "num_blocks": 8192, 00:18:14.035 "uuid": "21d99d17-77e1-43b7-b9b3-574bd4944d09", 00:18:14.035 "md_size": 32, 00:18:14.035 
"md_interleave": true, 00:18:14.035 "dif_type": 0, 00:18:14.035 "assigned_rate_limits": { 00:18:14.035 "rw_ios_per_sec": 0, 00:18:14.035 "rw_mbytes_per_sec": 0, 00:18:14.035 "r_mbytes_per_sec": 0, 00:18:14.035 "w_mbytes_per_sec": 0 00:18:14.035 }, 00:18:14.035 "claimed": true, 00:18:14.035 "claim_type": "exclusive_write", 00:18:14.035 "zoned": false, 00:18:14.035 "supported_io_types": { 00:18:14.035 "read": true, 00:18:14.035 "write": true, 00:18:14.035 "unmap": true, 00:18:14.035 "flush": true, 00:18:14.035 "reset": true, 00:18:14.035 "nvme_admin": false, 00:18:14.035 "nvme_io": false, 00:18:14.035 "nvme_io_md": false, 00:18:14.035 "write_zeroes": true, 00:18:14.035 "zcopy": true, 00:18:14.035 "get_zone_info": false, 00:18:14.035 "zone_management": false, 00:18:14.035 "zone_append": false, 00:18:14.035 "compare": false, 00:18:14.035 "compare_and_write": false, 00:18:14.035 "abort": true, 00:18:14.035 "seek_hole": false, 00:18:14.035 "seek_data": false, 00:18:14.035 "copy": true, 00:18:14.035 "nvme_iov_md": false 00:18:14.035 }, 00:18:14.035 "memory_domains": [ 00:18:14.035 { 00:18:14.035 "dma_device_id": "system", 00:18:14.035 "dma_device_type": 1 00:18:14.035 }, 00:18:14.035 { 00:18:14.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.035 "dma_device_type": 2 00:18:14.035 } 00:18:14.035 ], 00:18:14.035 "driver_specific": {} 00:18:14.035 } 00:18:14.035 ] 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.035 21:49:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.035 21:49:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.035 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.035 "name": "Existed_Raid", 00:18:14.035 "uuid": "00cf7a8a-f7da-4855-a9d6-89f646207ff2", 00:18:14.035 "strip_size_kb": 0, 00:18:14.035 "state": "configuring", 00:18:14.035 "raid_level": "raid1", 
00:18:14.035 "superblock": true, 00:18:14.035 "num_base_bdevs": 2, 00:18:14.035 "num_base_bdevs_discovered": 1, 00:18:14.035 "num_base_bdevs_operational": 2, 00:18:14.035 "base_bdevs_list": [ 00:18:14.035 { 00:18:14.035 "name": "BaseBdev1", 00:18:14.035 "uuid": "21d99d17-77e1-43b7-b9b3-574bd4944d09", 00:18:14.035 "is_configured": true, 00:18:14.035 "data_offset": 256, 00:18:14.035 "data_size": 7936 00:18:14.035 }, 00:18:14.035 { 00:18:14.035 "name": "BaseBdev2", 00:18:14.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.035 "is_configured": false, 00:18:14.035 "data_offset": 0, 00:18:14.035 "data_size": 0 00:18:14.035 } 00:18:14.035 ] 00:18:14.035 }' 00:18:14.035 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.035 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.605 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:14.605 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.605 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.605 [2024-09-29 21:49:33.403403] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:14.605 [2024-09-29 21:49:33.403442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:14.605 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.605 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:14.605 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:14.605 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.606 [2024-09-29 21:49:33.415430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.606 [2024-09-29 21:49:33.417181] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.606 [2024-09-29 21:49:33.417222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.606 
21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.606 "name": "Existed_Raid", 00:18:14.606 "uuid": "9165f5c8-2e70-4849-9ae6-f434f6ea0e25", 00:18:14.606 "strip_size_kb": 0, 00:18:14.606 "state": "configuring", 00:18:14.606 "raid_level": "raid1", 00:18:14.606 "superblock": true, 00:18:14.606 "num_base_bdevs": 2, 00:18:14.606 "num_base_bdevs_discovered": 1, 00:18:14.606 "num_base_bdevs_operational": 2, 00:18:14.606 "base_bdevs_list": [ 00:18:14.606 { 00:18:14.606 "name": "BaseBdev1", 00:18:14.606 "uuid": "21d99d17-77e1-43b7-b9b3-574bd4944d09", 00:18:14.606 "is_configured": true, 00:18:14.606 "data_offset": 256, 00:18:14.606 "data_size": 7936 00:18:14.606 }, 00:18:14.606 { 00:18:14.606 "name": "BaseBdev2", 00:18:14.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.606 "is_configured": false, 00:18:14.606 "data_offset": 0, 00:18:14.606 "data_size": 0 00:18:14.606 } 00:18:14.606 ] 00:18:14.606 }' 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:14.606 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.866 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:14.866 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.866 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.126 [2024-09-29 21:49:33.872809] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:15.126 [2024-09-29 21:49:33.873007] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:15.126 [2024-09-29 21:49:33.873020] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:15.126 [2024-09-29 21:49:33.873119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:15.126 [2024-09-29 21:49:33.873188] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:15.126 [2024-09-29 21:49:33.873203] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:15.126 [2024-09-29 21:49:33.873262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.126 BaseBdev2 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.126 [ 00:18:15.126 { 00:18:15.126 "name": "BaseBdev2", 00:18:15.126 "aliases": [ 00:18:15.126 "88a1d7e1-638e-402a-91df-d3403e58bdbf" 00:18:15.126 ], 00:18:15.126 "product_name": "Malloc disk", 00:18:15.126 "block_size": 4128, 00:18:15.126 "num_blocks": 8192, 00:18:15.126 "uuid": "88a1d7e1-638e-402a-91df-d3403e58bdbf", 00:18:15.126 "md_size": 32, 00:18:15.126 "md_interleave": true, 00:18:15.126 "dif_type": 0, 00:18:15.126 "assigned_rate_limits": { 00:18:15.126 "rw_ios_per_sec": 0, 00:18:15.126 "rw_mbytes_per_sec": 0, 00:18:15.126 "r_mbytes_per_sec": 0, 00:18:15.126 "w_mbytes_per_sec": 0 00:18:15.126 }, 00:18:15.126 "claimed": true, 00:18:15.126 "claim_type": "exclusive_write", 
00:18:15.126 "zoned": false, 00:18:15.126 "supported_io_types": { 00:18:15.126 "read": true, 00:18:15.126 "write": true, 00:18:15.126 "unmap": true, 00:18:15.126 "flush": true, 00:18:15.126 "reset": true, 00:18:15.126 "nvme_admin": false, 00:18:15.126 "nvme_io": false, 00:18:15.126 "nvme_io_md": false, 00:18:15.126 "write_zeroes": true, 00:18:15.126 "zcopy": true, 00:18:15.126 "get_zone_info": false, 00:18:15.126 "zone_management": false, 00:18:15.126 "zone_append": false, 00:18:15.126 "compare": false, 00:18:15.126 "compare_and_write": false, 00:18:15.126 "abort": true, 00:18:15.126 "seek_hole": false, 00:18:15.126 "seek_data": false, 00:18:15.126 "copy": true, 00:18:15.126 "nvme_iov_md": false 00:18:15.126 }, 00:18:15.126 "memory_domains": [ 00:18:15.126 { 00:18:15.126 "dma_device_id": "system", 00:18:15.126 "dma_device_type": 1 00:18:15.126 }, 00:18:15.126 { 00:18:15.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.126 "dma_device_type": 2 00:18:15.126 } 00:18:15.126 ], 00:18:15.126 "driver_specific": {} 00:18:15.126 } 00:18:15.126 ] 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.126 
21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.126 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.126 "name": "Existed_Raid", 00:18:15.126 "uuid": "9165f5c8-2e70-4849-9ae6-f434f6ea0e25", 00:18:15.126 "strip_size_kb": 0, 00:18:15.126 "state": "online", 00:18:15.126 "raid_level": "raid1", 00:18:15.126 "superblock": true, 00:18:15.126 "num_base_bdevs": 2, 00:18:15.126 "num_base_bdevs_discovered": 2, 00:18:15.126 
"num_base_bdevs_operational": 2, 00:18:15.126 "base_bdevs_list": [ 00:18:15.126 { 00:18:15.126 "name": "BaseBdev1", 00:18:15.126 "uuid": "21d99d17-77e1-43b7-b9b3-574bd4944d09", 00:18:15.126 "is_configured": true, 00:18:15.126 "data_offset": 256, 00:18:15.126 "data_size": 7936 00:18:15.126 }, 00:18:15.126 { 00:18:15.126 "name": "BaseBdev2", 00:18:15.126 "uuid": "88a1d7e1-638e-402a-91df-d3403e58bdbf", 00:18:15.127 "is_configured": true, 00:18:15.127 "data_offset": 256, 00:18:15.127 "data_size": 7936 00:18:15.127 } 00:18:15.127 ] 00:18:15.127 }' 00:18:15.127 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.127 21:49:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.697 21:49:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:15.697 [2024-09-29 21:49:34.384275] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.697 "name": "Existed_Raid", 00:18:15.697 "aliases": [ 00:18:15.697 "9165f5c8-2e70-4849-9ae6-f434f6ea0e25" 00:18:15.697 ], 00:18:15.697 "product_name": "Raid Volume", 00:18:15.697 "block_size": 4128, 00:18:15.697 "num_blocks": 7936, 00:18:15.697 "uuid": "9165f5c8-2e70-4849-9ae6-f434f6ea0e25", 00:18:15.697 "md_size": 32, 00:18:15.697 "md_interleave": true, 00:18:15.697 "dif_type": 0, 00:18:15.697 "assigned_rate_limits": { 00:18:15.697 "rw_ios_per_sec": 0, 00:18:15.697 "rw_mbytes_per_sec": 0, 00:18:15.697 "r_mbytes_per_sec": 0, 00:18:15.697 "w_mbytes_per_sec": 0 00:18:15.697 }, 00:18:15.697 "claimed": false, 00:18:15.697 "zoned": false, 00:18:15.697 "supported_io_types": { 00:18:15.697 "read": true, 00:18:15.697 "write": true, 00:18:15.697 "unmap": false, 00:18:15.697 "flush": false, 00:18:15.697 "reset": true, 00:18:15.697 "nvme_admin": false, 00:18:15.697 "nvme_io": false, 00:18:15.697 "nvme_io_md": false, 00:18:15.697 "write_zeroes": true, 00:18:15.697 "zcopy": false, 00:18:15.697 "get_zone_info": false, 00:18:15.697 "zone_management": false, 00:18:15.697 "zone_append": false, 00:18:15.697 "compare": false, 00:18:15.697 "compare_and_write": false, 00:18:15.697 "abort": false, 00:18:15.697 "seek_hole": false, 00:18:15.697 "seek_data": false, 00:18:15.697 "copy": false, 00:18:15.697 "nvme_iov_md": false 00:18:15.697 }, 00:18:15.697 "memory_domains": [ 00:18:15.697 { 00:18:15.697 "dma_device_id": "system", 00:18:15.697 "dma_device_type": 1 00:18:15.697 }, 00:18:15.697 { 00:18:15.697 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:15.697 "dma_device_type": 2 00:18:15.697 }, 00:18:15.697 { 00:18:15.697 "dma_device_id": "system", 00:18:15.697 "dma_device_type": 1 00:18:15.697 }, 00:18:15.697 { 00:18:15.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.697 "dma_device_type": 2 00:18:15.697 } 00:18:15.697 ], 00:18:15.697 "driver_specific": { 00:18:15.697 "raid": { 00:18:15.697 "uuid": "9165f5c8-2e70-4849-9ae6-f434f6ea0e25", 00:18:15.697 "strip_size_kb": 0, 00:18:15.697 "state": "online", 00:18:15.697 "raid_level": "raid1", 00:18:15.697 "superblock": true, 00:18:15.697 "num_base_bdevs": 2, 00:18:15.697 "num_base_bdevs_discovered": 2, 00:18:15.697 "num_base_bdevs_operational": 2, 00:18:15.697 "base_bdevs_list": [ 00:18:15.697 { 00:18:15.697 "name": "BaseBdev1", 00:18:15.697 "uuid": "21d99d17-77e1-43b7-b9b3-574bd4944d09", 00:18:15.697 "is_configured": true, 00:18:15.697 "data_offset": 256, 00:18:15.697 "data_size": 7936 00:18:15.697 }, 00:18:15.697 { 00:18:15.697 "name": "BaseBdev2", 00:18:15.697 "uuid": "88a1d7e1-638e-402a-91df-d3403e58bdbf", 00:18:15.697 "is_configured": true, 00:18:15.697 "data_offset": 256, 00:18:15.697 "data_size": 7936 00:18:15.697 } 00:18:15.697 ] 00:18:15.697 } 00:18:15.697 } 00:18:15.697 }' 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:15.697 BaseBdev2' 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:15.697 
21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.697 [2024-09-29 21:49:34.587685] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:15.697 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:15.698 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:15.698 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.698 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.698 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.698 21:49:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.698 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:15.958 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.958 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.958 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.958 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.958 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.958 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.958 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:15.958 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.958 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.958 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.958 "name": "Existed_Raid", 00:18:15.958 "uuid": "9165f5c8-2e70-4849-9ae6-f434f6ea0e25", 00:18:15.958 "strip_size_kb": 0, 00:18:15.958 "state": "online", 00:18:15.958 "raid_level": "raid1", 00:18:15.958 "superblock": true, 00:18:15.958 "num_base_bdevs": 2, 00:18:15.958 "num_base_bdevs_discovered": 1, 00:18:15.958 "num_base_bdevs_operational": 1, 00:18:15.958 "base_bdevs_list": [ 00:18:15.958 { 00:18:15.958 "name": null, 00:18:15.958 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:15.958 "is_configured": false, 00:18:15.958 "data_offset": 0, 00:18:15.958 "data_size": 7936 00:18:15.958 }, 00:18:15.958 { 00:18:15.958 "name": "BaseBdev2", 00:18:15.958 "uuid": "88a1d7e1-638e-402a-91df-d3403e58bdbf", 00:18:15.958 "is_configured": true, 00:18:15.958 "data_offset": 256, 00:18:15.958 "data_size": 7936 00:18:15.958 } 00:18:15.958 ] 00:18:15.958 }' 00:18:15.958 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.958 21:49:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.218 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:16.218 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:16.218 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.218 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:16.218 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.218 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.218 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.218 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:16.218 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.218 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:16.218 21:49:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.218 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.218 [2024-09-29 21:49:35.186658] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:16.218 [2024-09-29 21:49:35.186761] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.478 [2024-09-29 21:49:35.275073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.478 [2024-09-29 21:49:35.275126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.478 [2024-09-29 21:49:35.275139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88498 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88498 ']' 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88498 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88498 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:16.478 killing process with pid 88498 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88498' 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 88498 00:18:16.478 [2024-09-29 21:49:35.343509] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:16.478 21:49:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 88498 00:18:16.478 [2024-09-29 21:49:35.358958] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.860 
21:49:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:17.860 00:18:17.860 real 0m5.122s 00:18:17.860 user 0m7.288s 00:18:17.860 sys 0m0.896s 00:18:17.860 21:49:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:17.860 21:49:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.860 ************************************ 00:18:17.860 END TEST raid_state_function_test_sb_md_interleaved 00:18:17.860 ************************************ 00:18:17.860 21:49:36 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:17.860 21:49:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:17.860 21:49:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:17.860 21:49:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.860 ************************************ 00:18:17.860 START TEST raid_superblock_test_md_interleaved 00:18:17.860 ************************************ 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88750 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88750 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88750 ']' 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:17.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:17.860 21:49:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.860 [2024-09-29 21:49:36.727570] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:17.860 [2024-09-29 21:49:36.727695] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88750 ] 00:18:18.120 [2024-09-29 21:49:36.896856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.120 [2024-09-29 21:49:37.088161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.379 [2024-09-29 21:49:37.277966] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.379 [2024-09-29 21:49:37.277995] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.639 malloc1 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.639 [2024-09-29 21:49:37.570464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:18.639 [2024-09-29 21:49:37.570517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.639 [2024-09-29 21:49:37.570540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:18.639 [2024-09-29 21:49:37.570548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.639 
[2024-09-29 21:49:37.572246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.639 [2024-09-29 21:49:37.572282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:18.639 pt1 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.639 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.899 malloc2 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.899 [2024-09-29 21:49:37.653127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:18.899 [2024-09-29 21:49:37.653179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.899 [2024-09-29 21:49:37.653200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:18.899 [2024-09-29 21:49:37.653209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.899 [2024-09-29 21:49:37.654885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.899 [2024-09-29 21:49:37.654920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:18.899 pt2 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.899 [2024-09-29 21:49:37.665185] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:18.899 [2024-09-29 21:49:37.666799] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:18.899 [2024-09-29 21:49:37.666972] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:18.899 [2024-09-29 21:49:37.666987] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:18.899 [2024-09-29 21:49:37.667064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:18.899 [2024-09-29 21:49:37.667126] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:18.899 [2024-09-29 21:49:37.667139] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:18.899 [2024-09-29 21:49:37.667202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.899 
21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.899 "name": "raid_bdev1", 00:18:18.899 "uuid": "9c7d7f6f-3f1b-4a4d-b2d0-009b13766545", 00:18:18.899 "strip_size_kb": 0, 00:18:18.899 "state": "online", 00:18:18.899 "raid_level": "raid1", 00:18:18.899 "superblock": true, 00:18:18.899 "num_base_bdevs": 2, 00:18:18.899 "num_base_bdevs_discovered": 2, 00:18:18.899 "num_base_bdevs_operational": 2, 00:18:18.899 "base_bdevs_list": [ 00:18:18.899 { 00:18:18.899 "name": "pt1", 00:18:18.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:18.899 "is_configured": true, 00:18:18.899 "data_offset": 256, 00:18:18.899 "data_size": 7936 00:18:18.899 }, 00:18:18.899 { 00:18:18.899 "name": "pt2", 00:18:18.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:18.899 "is_configured": true, 00:18:18.899 "data_offset": 256, 00:18:18.899 "data_size": 7936 00:18:18.899 } 00:18:18.899 ] 00:18:18.899 }' 00:18:18.899 21:49:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.899 21:49:37 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.159 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:19.159 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:19.159 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:19.159 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:19.159 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:19.159 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:19.159 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:19.159 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.159 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.159 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:19.159 [2024-09-29 21:49:38.116580] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.159 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.419 "name": "raid_bdev1", 00:18:19.419 "aliases": [ 00:18:19.419 "9c7d7f6f-3f1b-4a4d-b2d0-009b13766545" 00:18:19.419 ], 00:18:19.419 "product_name": "Raid Volume", 00:18:19.419 "block_size": 4128, 00:18:19.419 "num_blocks": 7936, 00:18:19.419 "uuid": "9c7d7f6f-3f1b-4a4d-b2d0-009b13766545", 00:18:19.419 "md_size": 32, 
00:18:19.419 "md_interleave": true, 00:18:19.419 "dif_type": 0, 00:18:19.419 "assigned_rate_limits": { 00:18:19.419 "rw_ios_per_sec": 0, 00:18:19.419 "rw_mbytes_per_sec": 0, 00:18:19.419 "r_mbytes_per_sec": 0, 00:18:19.419 "w_mbytes_per_sec": 0 00:18:19.419 }, 00:18:19.419 "claimed": false, 00:18:19.419 "zoned": false, 00:18:19.419 "supported_io_types": { 00:18:19.419 "read": true, 00:18:19.419 "write": true, 00:18:19.419 "unmap": false, 00:18:19.419 "flush": false, 00:18:19.419 "reset": true, 00:18:19.419 "nvme_admin": false, 00:18:19.419 "nvme_io": false, 00:18:19.419 "nvme_io_md": false, 00:18:19.419 "write_zeroes": true, 00:18:19.419 "zcopy": false, 00:18:19.419 "get_zone_info": false, 00:18:19.419 "zone_management": false, 00:18:19.419 "zone_append": false, 00:18:19.419 "compare": false, 00:18:19.419 "compare_and_write": false, 00:18:19.419 "abort": false, 00:18:19.419 "seek_hole": false, 00:18:19.419 "seek_data": false, 00:18:19.419 "copy": false, 00:18:19.419 "nvme_iov_md": false 00:18:19.419 }, 00:18:19.419 "memory_domains": [ 00:18:19.419 { 00:18:19.419 "dma_device_id": "system", 00:18:19.419 "dma_device_type": 1 00:18:19.419 }, 00:18:19.419 { 00:18:19.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.419 "dma_device_type": 2 00:18:19.419 }, 00:18:19.419 { 00:18:19.419 "dma_device_id": "system", 00:18:19.419 "dma_device_type": 1 00:18:19.419 }, 00:18:19.419 { 00:18:19.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.419 "dma_device_type": 2 00:18:19.419 } 00:18:19.419 ], 00:18:19.419 "driver_specific": { 00:18:19.419 "raid": { 00:18:19.419 "uuid": "9c7d7f6f-3f1b-4a4d-b2d0-009b13766545", 00:18:19.419 "strip_size_kb": 0, 00:18:19.419 "state": "online", 00:18:19.419 "raid_level": "raid1", 00:18:19.419 "superblock": true, 00:18:19.419 "num_base_bdevs": 2, 00:18:19.419 "num_base_bdevs_discovered": 2, 00:18:19.419 "num_base_bdevs_operational": 2, 00:18:19.419 "base_bdevs_list": [ 00:18:19.419 { 00:18:19.419 "name": "pt1", 00:18:19.419 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:19.419 "is_configured": true, 00:18:19.419 "data_offset": 256, 00:18:19.419 "data_size": 7936 00:18:19.419 }, 00:18:19.419 { 00:18:19.419 "name": "pt2", 00:18:19.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:19.419 "is_configured": true, 00:18:19.419 "data_offset": 256, 00:18:19.419 "data_size": 7936 00:18:19.419 } 00:18:19.419 ] 00:18:19.419 } 00:18:19.419 } 00:18:19.419 }' 00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:19.419 pt2' 00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:19.419 21:49:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.419 [2024-09-29 21:49:38.332268] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9c7d7f6f-3f1b-4a4d-b2d0-009b13766545
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 9c7d7f6f-3f1b-4a4d-b2d0-009b13766545 ']'
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.419 [2024-09-29 21:49:38.375923] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:19.419 [2024-09-29 21:49:38.375945] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:19.419 [2024-09-29 21:49:38.376007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:19.419 [2024-09-29 21:49:38.376068] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:19.419 [2024-09-29 21:49:38.376080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:18:19.419 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.679 [2024-09-29 21:49:38.511720] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:18:19.679 [2024-09-29 21:49:38.513447] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:18:19.679 [2024-09-29 21:49:38.513517] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:18:19.679 [2024-09-29 21:49:38.513561] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:18:19.679 [2024-09-29 21:49:38.513591] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:19.679 [2024-09-29 21:49:38.513600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:18:19.679 request:
00:18:19.679 {
00:18:19.679 "name": "raid_bdev1",
00:18:19.679 "raid_level": "raid1",
00:18:19.679 "base_bdevs": [
00:18:19.679 "malloc1",
00:18:19.679 "malloc2"
00:18:19.679 ],
00:18:19.679 "superblock": false,
00:18:19.679 "method": "bdev_raid_create",
00:18:19.679 "req_id": 1
00:18:19.679 }
00:18:19.679 Got JSON-RPC error response
00:18:19.679 response:
00:18:19.679 {
00:18:19.679 "code": -17,
00:18:19.679 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:18:19.679 }
00:18:19.679 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.680 [2024-09-29 21:49:38.579571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:19.680 [2024-09-29 21:49:38.579615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:19.680 [2024-09-29 21:49:38.579629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:18:19.680 [2024-09-29 21:49:38.579639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:19.680 [2024-09-29 21:49:38.581350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:19.680 [2024-09-29 21:49:38.581385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:19.680 [2024-09-29 21:49:38.581424] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:18:19.680 [2024-09-29 21:49:38.581477] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:19.680 pt1
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:19.680 "name": "raid_bdev1",
00:18:19.680 "uuid": "9c7d7f6f-3f1b-4a4d-b2d0-009b13766545",
00:18:19.680 "strip_size_kb": 0,
00:18:19.680 "state": "configuring",
00:18:19.680 "raid_level": "raid1",
00:18:19.680 "superblock": true,
00:18:19.680 "num_base_bdevs": 2,
00:18:19.680 "num_base_bdevs_discovered": 1,
00:18:19.680 "num_base_bdevs_operational": 2,
00:18:19.680 "base_bdevs_list": [
00:18:19.680 {
00:18:19.680 "name": "pt1",
00:18:19.680 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:19.680 "is_configured": true,
00:18:19.680 "data_offset": 256,
00:18:19.680 "data_size": 7936
00:18:19.680 },
00:18:19.680 {
00:18:19.680 "name": null,
00:18:19.680 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:19.680 "is_configured": false,
00:18:19.680 "data_offset": 256,
00:18:19.680 "data_size": 7936
00:18:19.680 }
00:18:19.680 ]
00:18:19.680 }'
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:19.680 21:49:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:20.248 [2024-09-29 21:49:39.050757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:20.248 [2024-09-29 21:49:39.050812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:20.248 [2024-09-29 21:49:39.050846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:18:20.248 [2024-09-29 21:49:39.050856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:20.248 [2024-09-29 21:49:39.050960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:20.248 [2024-09-29 21:49:39.050982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:20.248 [2024-09-29 21:49:39.051016] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:18:20.248 [2024-09-29 21:49:39.051059] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:20.248 [2024-09-29 21:49:39.051135] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:18:20.248 [2024-09-29 21:49:39.051150] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:18:20.248 [2024-09-29 21:49:39.051210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:18:20.248 [2024-09-29 21:49:39.051270] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:18:20.248 [2024-09-29 21:49:39.051281] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:18:20.248 [2024-09-29 21:49:39.051331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:20.248 pt2
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:20.248 "name": "raid_bdev1",
00:18:20.248 "uuid": "9c7d7f6f-3f1b-4a4d-b2d0-009b13766545",
00:18:20.248 "strip_size_kb": 0,
00:18:20.248 "state": "online",
00:18:20.248 "raid_level": "raid1",
00:18:20.248 "superblock": true,
00:18:20.248 "num_base_bdevs": 2,
00:18:20.248 "num_base_bdevs_discovered": 2,
00:18:20.248 "num_base_bdevs_operational": 2,
00:18:20.248 "base_bdevs_list": [
00:18:20.248 {
00:18:20.248 "name": "pt1",
00:18:20.248 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:20.248 "is_configured": true,
00:18:20.248 "data_offset": 256,
00:18:20.248 "data_size": 7936
00:18:20.248 },
00:18:20.248 {
00:18:20.248 "name": "pt2",
00:18:20.248 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:20.248 "is_configured": true,
00:18:20.248 "data_offset": 256,
00:18:20.248 "data_size": 7936
00:18:20.248 }
00:18:20.248 ]
00:18:20.248 }'
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:20.248 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:18:20.818 [2024-09-29 21:49:39.534175] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:18:20.818 "name": "raid_bdev1",
00:18:20.818 "aliases": [
00:18:20.818 "9c7d7f6f-3f1b-4a4d-b2d0-009b13766545"
00:18:20.818 ],
00:18:20.818 "product_name": "Raid Volume",
00:18:20.818 "block_size": 4128,
00:18:20.818 "num_blocks": 7936,
00:18:20.818 "uuid": "9c7d7f6f-3f1b-4a4d-b2d0-009b13766545",
00:18:20.818 "md_size": 32,
00:18:20.818 "md_interleave": true,
00:18:20.818 "dif_type": 0,
00:18:20.818 "assigned_rate_limits": {
00:18:20.818 "rw_ios_per_sec": 0,
00:18:20.818 "rw_mbytes_per_sec": 0,
00:18:20.818 "r_mbytes_per_sec": 0,
00:18:20.818 "w_mbytes_per_sec": 0
00:18:20.818 },
00:18:20.818 "claimed": false,
00:18:20.818 "zoned": false,
00:18:20.818 "supported_io_types": {
00:18:20.818 "read": true,
00:18:20.818 "write": true,
00:18:20.818 "unmap": false,
00:18:20.818 "flush": false,
00:18:20.818 "reset": true,
00:18:20.818 "nvme_admin": false,
00:18:20.818 "nvme_io": false,
00:18:20.818 "nvme_io_md": false,
00:18:20.818 "write_zeroes": true,
00:18:20.818 "zcopy": false,
00:18:20.818 "get_zone_info": false,
00:18:20.818 "zone_management": false,
00:18:20.818 "zone_append": false,
00:18:20.818 "compare": false,
00:18:20.818 "compare_and_write": false,
00:18:20.818 "abort": false,
00:18:20.818 "seek_hole": false,
00:18:20.818 "seek_data": false,
00:18:20.818 "copy": false,
00:18:20.818 "nvme_iov_md": false
00:18:20.818 },
00:18:20.818 "memory_domains": [
00:18:20.818 {
00:18:20.818 "dma_device_id": "system",
00:18:20.818 "dma_device_type": 1
00:18:20.818 },
00:18:20.818 {
00:18:20.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:20.818 "dma_device_type": 2
00:18:20.818 },
00:18:20.818 {
00:18:20.818 "dma_device_id": "system",
00:18:20.818 "dma_device_type": 1
00:18:20.818 },
00:18:20.818 {
00:18:20.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:20.818 "dma_device_type": 2
00:18:20.818 }
00:18:20.818 ],
00:18:20.818 "driver_specific": {
00:18:20.818 "raid": {
00:18:20.818 "uuid": "9c7d7f6f-3f1b-4a4d-b2d0-009b13766545",
00:18:20.818 "strip_size_kb": 0,
00:18:20.818 "state": "online",
00:18:20.818 "raid_level": "raid1",
00:18:20.818 "superblock": true,
00:18:20.818 "num_base_bdevs": 2,
00:18:20.818 "num_base_bdevs_discovered": 2,
00:18:20.818 "num_base_bdevs_operational": 2,
00:18:20.818 "base_bdevs_list": [
00:18:20.818 {
00:18:20.818 "name": "pt1",
00:18:20.818 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:20.818 "is_configured": true,
00:18:20.818 "data_offset": 256,
00:18:20.818 "data_size": 7936
00:18:20.818 },
00:18:20.818 {
00:18:20.818 "name": "pt2",
00:18:20.818 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:20.818 "is_configured": true,
00:18:20.818 "data_offset": 256,
00:18:20.818 "data_size": 7936
00:18:20.818 }
00:18:20.818 ]
00:18:20.818 }
00:18:20.818 }
00:18:20.818 }'
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:18:20.818 pt2'
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:18:20.818 [2024-09-29 21:49:39.733799] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 9c7d7f6f-3f1b-4a4d-b2d0-009b13766545 '!=' 9c7d7f6f-3f1b-4a4d-b2d0-009b13766545 ']'
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:20.818 [2024-09-29 21:49:39.781543] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.818 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:21.078 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.078 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:21.078 "name": "raid_bdev1",
00:18:21.078 "uuid": "9c7d7f6f-3f1b-4a4d-b2d0-009b13766545",
00:18:21.078 "strip_size_kb": 0,
00:18:21.078 "state": "online",
00:18:21.078 "raid_level": "raid1",
00:18:21.078 "superblock": true,
00:18:21.078 "num_base_bdevs": 2,
00:18:21.078 "num_base_bdevs_discovered": 1,
00:18:21.078 "num_base_bdevs_operational": 1,
00:18:21.078 "base_bdevs_list": [
00:18:21.078 {
00:18:21.078 "name": null,
00:18:21.078 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:21.078 "is_configured": false,
00:18:21.078 "data_offset": 0,
00:18:21.078 "data_size": 7936
00:18:21.078 },
00:18:21.078 {
00:18:21.078 "name": "pt2",
00:18:21.078 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:21.078 "is_configured": true,
00:18:21.078 "data_offset": 256,
00:18:21.078 "data_size": 7936
00:18:21.078 }
00:18:21.078 ]
00:18:21.078 }'
00:18:21.078 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:21.078 21:49:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:21.337 [2024-09-29 21:49:40.152985] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:21.337 [2024-09-29 21:49:40.153010] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:21.337 [2024-09-29 21:49:40.153071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:21.337 [2024-09-29 21:49:40.153109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:18:21.337 [2024-09-29 21:49:40.153120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:21.337 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1
00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:18:21.338 [2024-09-29 21:49:40.228864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:18:21.338 [2024-09-29 21:49:40.228913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:21.338 [2024-09-29 21:49:40.228925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:18:21.338 [2024-09-29 21:49:40.228935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:21.338 [2024-09-29 21:49:40.230684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:21.338 [2024-09-29 21:49:40.230721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:18:21.338 [2024-09-29 21:49:40.230760] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:18:21.338 [2024-09-29 21:49:40.230797] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:18:21.338 [2024-09-29 21:49:40.230848] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:18:21.338 [2024-09-29 21:49:40.230859] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*:
blockcnt 7936, blocklen 4128 00:18:21.338 [2024-09-29 21:49:40.230938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:21.338 [2024-09-29 21:49:40.230995] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:21.338 [2024-09-29 21:49:40.231002] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:21.338 [2024-09-29 21:49:40.231067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.338 pt2 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.338 21:49:40 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.338 "name": "raid_bdev1", 00:18:21.338 "uuid": "9c7d7f6f-3f1b-4a4d-b2d0-009b13766545", 00:18:21.338 "strip_size_kb": 0, 00:18:21.338 "state": "online", 00:18:21.338 "raid_level": "raid1", 00:18:21.338 "superblock": true, 00:18:21.338 "num_base_bdevs": 2, 00:18:21.338 "num_base_bdevs_discovered": 1, 00:18:21.338 "num_base_bdevs_operational": 1, 00:18:21.338 "base_bdevs_list": [ 00:18:21.338 { 00:18:21.338 "name": null, 00:18:21.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.338 "is_configured": false, 00:18:21.338 "data_offset": 256, 00:18:21.338 "data_size": 7936 00:18:21.338 }, 00:18:21.338 { 00:18:21.338 "name": "pt2", 00:18:21.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.338 "is_configured": true, 00:18:21.338 "data_offset": 256, 00:18:21.338 "data_size": 7936 00:18:21.338 } 00:18:21.338 ] 00:18:21.338 }' 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.338 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:21.907 21:49:40 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.907 [2024-09-29 21:49:40.648150] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.907 [2024-09-29 21:49:40.648175] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.907 [2024-09-29 21:49:40.648216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.907 [2024-09-29 21:49:40.648277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.907 [2024-09-29 21:49:40.648286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.907 [2024-09-29 21:49:40.712112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:21.907 [2024-09-29 21:49:40.712149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.907 [2024-09-29 21:49:40.712164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:21.907 [2024-09-29 21:49:40.712172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.907 [2024-09-29 21:49:40.713889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.907 [2024-09-29 21:49:40.713922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:21.907 [2024-09-29 21:49:40.713962] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:21.907 [2024-09-29 21:49:40.713998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:21.907 [2024-09-29 21:49:40.714085] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:21.907 [2024-09-29 21:49:40.714096] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:21.907 [2024-09-29 21:49:40.714112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:21.907 [2024-09-29 21:49:40.714163] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:21.907 [2024-09-29 21:49:40.714226] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:21.907 [2024-09-29 21:49:40.714234] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:21.907 [2024-09-29 21:49:40.714285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:21.907 [2024-09-29 21:49:40.714335] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:21.907 [2024-09-29 21:49:40.714346] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:21.907 [2024-09-29 21:49:40.714403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.907 pt1 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.907 21:49:40 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.907 "name": "raid_bdev1", 00:18:21.907 "uuid": "9c7d7f6f-3f1b-4a4d-b2d0-009b13766545", 00:18:21.907 "strip_size_kb": 0, 00:18:21.907 "state": "online", 00:18:21.907 "raid_level": "raid1", 00:18:21.907 "superblock": true, 00:18:21.907 "num_base_bdevs": 2, 00:18:21.907 "num_base_bdevs_discovered": 1, 00:18:21.907 "num_base_bdevs_operational": 1, 00:18:21.907 "base_bdevs_list": [ 00:18:21.907 { 00:18:21.907 "name": null, 00:18:21.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.907 "is_configured": false, 00:18:21.907 "data_offset": 256, 00:18:21.907 "data_size": 7936 00:18:21.907 }, 00:18:21.907 { 00:18:21.907 "name": "pt2", 00:18:21.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:21.907 "is_configured": true, 00:18:21.907 "data_offset": 256, 00:18:21.907 "data_size": 7936 00:18:21.907 } 00:18:21.907 ] 00:18:21.907 }' 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.907 21:49:40 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:22.167 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:22.167 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:22.167 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.167 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.168 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:22.428 [2024-09-29 21:49:41.163480] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 9c7d7f6f-3f1b-4a4d-b2d0-009b13766545 '!=' 9c7d7f6f-3f1b-4a4d-b2d0-009b13766545 ']' 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88750 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88750 ']' 00:18:22.428 21:49:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88750 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88750 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:22.428 killing process with pid 88750 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88750' 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 88750 00:18:22.428 [2024-09-29 21:49:41.245488] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:22.428 [2024-09-29 21:49:41.245547] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.428 [2024-09-29 21:49:41.245591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.428 [2024-09-29 21:49:41.245608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:22.428 21:49:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 88750 00:18:22.688 [2024-09-29 21:49:41.435409] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:24.070 21:49:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:24.070 00:18:24.071 real 0m5.982s 00:18:24.071 user 0m8.927s 00:18:24.071 sys 0m1.103s 00:18:24.071 
21:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.071 21:49:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.071 ************************************ 00:18:24.071 END TEST raid_superblock_test_md_interleaved 00:18:24.071 ************************************ 00:18:24.071 21:49:42 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:24.071 21:49:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:24.071 21:49:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:24.071 21:49:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.071 ************************************ 00:18:24.071 START TEST raid_rebuild_test_sb_md_interleaved 00:18:24.071 ************************************ 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89073 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89073 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89073 ']' 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.071 21:49:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.071 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:24.071 Zero copy mechanism will not be used. 00:18:24.071 [2024-09-29 21:49:42.799146] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:24.071 [2024-09-29 21:49:42.799258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89073 ] 00:18:24.071 [2024-09-29 21:49:42.952890] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.331 [2024-09-29 21:49:43.146324] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.591 [2024-09-29 21:49:43.338520] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.591 [2024-09-29 21:49:43.338572] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.856 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.857 BaseBdev1_malloc 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.857 21:49:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.857 [2024-09-29 21:49:43.646480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:24.857 [2024-09-29 21:49:43.646545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.857 [2024-09-29 21:49:43.646568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:24.857 [2024-09-29 21:49:43.646578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.857 [2024-09-29 21:49:43.648229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.857 [2024-09-29 21:49:43.648274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:24.857 BaseBdev1 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.857 BaseBdev2_malloc 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:24.857 [2024-09-29 21:49:43.723248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:24.857 [2024-09-29 21:49:43.723305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.857 [2024-09-29 21:49:43.723323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:24.857 [2024-09-29 21:49:43.723333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.857 [2024-09-29 21:49:43.724979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.857 [2024-09-29 21:49:43.725017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:24.857 BaseBdev2 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.857 spare_malloc 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.857 spare_delay 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.857 [2024-09-29 21:49:43.790605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:24.857 [2024-09-29 21:49:43.790658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:24.857 [2024-09-29 21:49:43.790677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:24.857 [2024-09-29 21:49:43.790687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:24.857 [2024-09-29 21:49:43.792358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:24.857 [2024-09-29 21:49:43.792395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:24.857 spare 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.857 [2024-09-29 21:49:43.802636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.857 [2024-09-29 21:49:43.804253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:24.857 [2024-09-29 
21:49:43.804421] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:24.857 [2024-09-29 21:49:43.804435] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:24.857 [2024-09-29 21:49:43.804496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:24.857 [2024-09-29 21:49:43.804557] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:24.857 [2024-09-29 21:49:43.804570] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:24.857 [2024-09-29 21:49:43.804631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.857 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.128 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.128 "name": "raid_bdev1", 00:18:25.128 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:25.128 "strip_size_kb": 0, 00:18:25.128 "state": "online", 00:18:25.128 "raid_level": "raid1", 00:18:25.128 "superblock": true, 00:18:25.128 "num_base_bdevs": 2, 00:18:25.128 "num_base_bdevs_discovered": 2, 00:18:25.128 "num_base_bdevs_operational": 2, 00:18:25.128 "base_bdevs_list": [ 00:18:25.128 { 00:18:25.128 "name": "BaseBdev1", 00:18:25.128 "uuid": "af87c61f-7559-55f2-b2b6-d3329f34d396", 00:18:25.128 "is_configured": true, 00:18:25.128 "data_offset": 256, 00:18:25.128 "data_size": 7936 00:18:25.128 }, 00:18:25.128 { 00:18:25.128 "name": "BaseBdev2", 00:18:25.128 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:25.128 "is_configured": true, 00:18:25.128 "data_offset": 256, 00:18:25.128 "data_size": 7936 00:18:25.128 } 00:18:25.128 ] 00:18:25.128 }' 00:18:25.128 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.128 21:49:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.404 21:49:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.404 [2024-09-29 21:49:44.258125] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:25.404 21:49:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.404 [2024-09-29 21:49:44.353693] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.404 21:49:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.404 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.685 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.685 "name": "raid_bdev1", 00:18:25.685 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:25.685 "strip_size_kb": 0, 00:18:25.685 "state": "online", 00:18:25.685 "raid_level": "raid1", 00:18:25.685 "superblock": true, 00:18:25.685 "num_base_bdevs": 2, 00:18:25.685 "num_base_bdevs_discovered": 1, 00:18:25.685 "num_base_bdevs_operational": 1, 00:18:25.685 "base_bdevs_list": [ 00:18:25.685 { 00:18:25.685 "name": null, 00:18:25.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.685 "is_configured": false, 00:18:25.685 "data_offset": 0, 00:18:25.685 "data_size": 7936 00:18:25.685 }, 00:18:25.685 { 00:18:25.685 "name": "BaseBdev2", 00:18:25.685 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:25.685 "is_configured": true, 00:18:25.685 "data_offset": 256, 00:18:25.685 "data_size": 7936 00:18:25.685 } 00:18:25.685 ] 00:18:25.685 }' 00:18:25.685 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.685 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.961 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:25.961 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.961 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.961 [2024-09-29 21:49:44.844855] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:25.961 [2024-09-29 21:49:44.858400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:25.961 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.961 21:49:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:25.961 [2024-09-29 21:49:44.860083] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:26.900 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.900 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.900 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.900 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.900 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.901 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.901 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.901 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.901 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.161 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.161 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.161 "name": "raid_bdev1", 00:18:27.161 
"uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:27.161 "strip_size_kb": 0, 00:18:27.161 "state": "online", 00:18:27.161 "raid_level": "raid1", 00:18:27.161 "superblock": true, 00:18:27.161 "num_base_bdevs": 2, 00:18:27.161 "num_base_bdevs_discovered": 2, 00:18:27.161 "num_base_bdevs_operational": 2, 00:18:27.161 "process": { 00:18:27.161 "type": "rebuild", 00:18:27.161 "target": "spare", 00:18:27.161 "progress": { 00:18:27.161 "blocks": 2560, 00:18:27.161 "percent": 32 00:18:27.161 } 00:18:27.161 }, 00:18:27.161 "base_bdevs_list": [ 00:18:27.161 { 00:18:27.161 "name": "spare", 00:18:27.161 "uuid": "b7601b9e-38e2-5c15-8ff0-3a2071f53d30", 00:18:27.161 "is_configured": true, 00:18:27.161 "data_offset": 256, 00:18:27.161 "data_size": 7936 00:18:27.161 }, 00:18:27.161 { 00:18:27.161 "name": "BaseBdev2", 00:18:27.161 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:27.161 "is_configured": true, 00:18:27.161 "data_offset": 256, 00:18:27.161 "data_size": 7936 00:18:27.161 } 00:18:27.161 ] 00:18:27.161 }' 00:18:27.161 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.161 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.161 21:49:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.161 [2024-09-29 21:49:46.016718] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:18:27.161 [2024-09-29 21:49:46.064612] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:27.161 [2024-09-29 21:49:46.064986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.161 [2024-09-29 21:49:46.065024] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:27.161 [2024-09-29 21:49:46.065048] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.161 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.421 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.421 "name": "raid_bdev1", 00:18:27.421 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:27.421 "strip_size_kb": 0, 00:18:27.421 "state": "online", 00:18:27.421 "raid_level": "raid1", 00:18:27.421 "superblock": true, 00:18:27.421 "num_base_bdevs": 2, 00:18:27.421 "num_base_bdevs_discovered": 1, 00:18:27.421 "num_base_bdevs_operational": 1, 00:18:27.421 "base_bdevs_list": [ 00:18:27.421 { 00:18:27.421 "name": null, 00:18:27.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.421 "is_configured": false, 00:18:27.421 "data_offset": 0, 00:18:27.421 "data_size": 7936 00:18:27.421 }, 00:18:27.421 { 00:18:27.421 "name": "BaseBdev2", 00:18:27.421 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:27.421 "is_configured": true, 00:18:27.421 "data_offset": 256, 00:18:27.421 "data_size": 7936 00:18:27.421 } 00:18:27.421 ] 00:18:27.421 }' 00:18:27.421 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.421 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.681 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:27.681 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:27.681 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:27.681 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:27.681 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.681 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.681 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.681 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.681 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.681 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.681 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.681 "name": "raid_bdev1", 00:18:27.681 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:27.681 "strip_size_kb": 0, 00:18:27.681 "state": "online", 00:18:27.681 "raid_level": "raid1", 00:18:27.681 "superblock": true, 00:18:27.681 "num_base_bdevs": 2, 00:18:27.681 "num_base_bdevs_discovered": 1, 00:18:27.681 "num_base_bdevs_operational": 1, 00:18:27.681 "base_bdevs_list": [ 00:18:27.681 { 00:18:27.681 "name": null, 00:18:27.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.681 "is_configured": false, 00:18:27.681 "data_offset": 0, 00:18:27.681 "data_size": 7936 00:18:27.681 }, 00:18:27.681 { 00:18:27.681 "name": "BaseBdev2", 00:18:27.681 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:27.681 "is_configured": true, 00:18:27.681 "data_offset": 256, 00:18:27.681 "data_size": 7936 00:18:27.681 } 00:18:27.681 ] 00:18:27.681 }' 
00:18:27.681 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.682 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:27.682 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.942 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:27.942 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:27.942 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.942 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.942 [2024-09-29 21:49:46.691808] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.942 [2024-09-29 21:49:46.706857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:27.942 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.942 21:49:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:27.942 [2024-09-29 21:49:46.708474] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.882 "name": "raid_bdev1", 00:18:28.882 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:28.882 "strip_size_kb": 0, 00:18:28.882 "state": "online", 00:18:28.882 "raid_level": "raid1", 00:18:28.882 "superblock": true, 00:18:28.882 "num_base_bdevs": 2, 00:18:28.882 "num_base_bdevs_discovered": 2, 00:18:28.882 "num_base_bdevs_operational": 2, 00:18:28.882 "process": { 00:18:28.882 "type": "rebuild", 00:18:28.882 "target": "spare", 00:18:28.882 "progress": { 00:18:28.882 "blocks": 2560, 00:18:28.882 "percent": 32 00:18:28.882 } 00:18:28.882 }, 00:18:28.882 "base_bdevs_list": [ 00:18:28.882 { 00:18:28.882 "name": "spare", 00:18:28.882 "uuid": "b7601b9e-38e2-5c15-8ff0-3a2071f53d30", 00:18:28.882 "is_configured": true, 00:18:28.882 "data_offset": 256, 00:18:28.882 "data_size": 7936 00:18:28.882 }, 00:18:28.882 { 00:18:28.882 "name": "BaseBdev2", 00:18:28.882 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:28.882 "is_configured": true, 00:18:28.882 "data_offset": 256, 00:18:28.882 "data_size": 7936 00:18:28.882 } 00:18:28.882 ] 00:18:28.882 }' 00:18:28.882 21:49:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:28.882 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=743 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.882 21:49:47 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.882 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.142 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:29.142 "name": "raid_bdev1", 00:18:29.142 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:29.142 "strip_size_kb": 0, 00:18:29.142 "state": "online", 00:18:29.142 "raid_level": "raid1", 00:18:29.142 "superblock": true, 00:18:29.142 "num_base_bdevs": 2, 00:18:29.142 "num_base_bdevs_discovered": 2, 00:18:29.142 "num_base_bdevs_operational": 2, 00:18:29.142 "process": { 00:18:29.142 "type": "rebuild", 00:18:29.142 "target": "spare", 00:18:29.142 "progress": { 00:18:29.142 "blocks": 2816, 00:18:29.142 "percent": 35 00:18:29.142 } 00:18:29.142 }, 00:18:29.142 "base_bdevs_list": [ 00:18:29.142 { 00:18:29.142 "name": "spare", 00:18:29.142 "uuid": "b7601b9e-38e2-5c15-8ff0-3a2071f53d30", 00:18:29.142 "is_configured": true, 00:18:29.142 "data_offset": 256, 00:18:29.142 "data_size": 7936 00:18:29.142 }, 00:18:29.142 { 00:18:29.142 "name": "BaseBdev2", 00:18:29.142 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:29.142 "is_configured": true, 00:18:29.142 "data_offset": 256, 00:18:29.142 "data_size": 7936 00:18:29.142 } 00:18:29.142 ] 00:18:29.142 }' 00:18:29.142 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:29.142 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.142 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:29.142 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.142 21:49:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:30.082 21:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:30.082 21:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.082 21:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.082 21:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.082 21:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.082 21:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.082 21:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.082 21:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.082 21:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:30.082 21:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.082 21:49:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.082 21:49:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.082 "name": "raid_bdev1", 00:18:30.082 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:30.082 "strip_size_kb": 0, 00:18:30.082 "state": "online", 00:18:30.082 "raid_level": "raid1", 00:18:30.082 "superblock": true, 00:18:30.082 "num_base_bdevs": 2, 00:18:30.082 "num_base_bdevs_discovered": 2, 00:18:30.082 "num_base_bdevs_operational": 2, 00:18:30.082 "process": { 00:18:30.082 "type": "rebuild", 00:18:30.082 "target": "spare", 00:18:30.082 "progress": { 00:18:30.082 "blocks": 5632, 00:18:30.082 "percent": 70 00:18:30.082 } 00:18:30.082 }, 00:18:30.082 "base_bdevs_list": [ 00:18:30.082 { 00:18:30.082 "name": "spare", 00:18:30.082 "uuid": "b7601b9e-38e2-5c15-8ff0-3a2071f53d30", 00:18:30.082 "is_configured": true, 00:18:30.082 "data_offset": 256, 00:18:30.082 "data_size": 7936 00:18:30.082 }, 00:18:30.082 { 00:18:30.082 "name": "BaseBdev2", 00:18:30.082 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:30.082 "is_configured": true, 00:18:30.082 "data_offset": 256, 00:18:30.082 "data_size": 7936 00:18:30.082 } 00:18:30.082 ] 00:18:30.082 }' 00:18:30.082 21:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.341 21:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.341 21:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.341 21:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.341 21:49:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:30.911 [2024-09-29 21:49:49.819706] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:30.911 [2024-09-29 21:49:49.819772] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:30.911 [2024-09-29 21:49:49.819861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.171 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:31.171 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.171 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.171 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.171 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.171 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.171 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.171 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.171 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.171 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.171 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.431 "name": "raid_bdev1", 00:18:31.431 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:31.431 "strip_size_kb": 0, 00:18:31.431 "state": "online", 00:18:31.431 "raid_level": "raid1", 00:18:31.431 "superblock": true, 00:18:31.431 "num_base_bdevs": 2, 00:18:31.431 
"num_base_bdevs_discovered": 2, 00:18:31.431 "num_base_bdevs_operational": 2, 00:18:31.431 "base_bdevs_list": [ 00:18:31.431 { 00:18:31.431 "name": "spare", 00:18:31.431 "uuid": "b7601b9e-38e2-5c15-8ff0-3a2071f53d30", 00:18:31.431 "is_configured": true, 00:18:31.431 "data_offset": 256, 00:18:31.431 "data_size": 7936 00:18:31.431 }, 00:18:31.431 { 00:18:31.431 "name": "BaseBdev2", 00:18:31.431 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:31.431 "is_configured": true, 00:18:31.431 "data_offset": 256, 00:18:31.431 "data_size": 7936 00:18:31.431 } 00:18:31.431 ] 00:18:31.431 }' 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.431 21:49:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.431 "name": "raid_bdev1", 00:18:31.431 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:31.431 "strip_size_kb": 0, 00:18:31.431 "state": "online", 00:18:31.431 "raid_level": "raid1", 00:18:31.431 "superblock": true, 00:18:31.431 "num_base_bdevs": 2, 00:18:31.431 "num_base_bdevs_discovered": 2, 00:18:31.431 "num_base_bdevs_operational": 2, 00:18:31.431 "base_bdevs_list": [ 00:18:31.431 { 00:18:31.431 "name": "spare", 00:18:31.431 "uuid": "b7601b9e-38e2-5c15-8ff0-3a2071f53d30", 00:18:31.431 "is_configured": true, 00:18:31.431 "data_offset": 256, 00:18:31.431 "data_size": 7936 00:18:31.431 }, 00:18:31.431 { 00:18:31.431 "name": "BaseBdev2", 00:18:31.431 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:31.431 "is_configured": true, 00:18:31.431 "data_offset": 256, 00:18:31.431 "data_size": 7936 00:18:31.431 } 00:18:31.431 ] 00:18:31.431 }' 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:31.431 21:49:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.431 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.691 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.691 "name": 
"raid_bdev1", 00:18:31.691 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:31.691 "strip_size_kb": 0, 00:18:31.691 "state": "online", 00:18:31.691 "raid_level": "raid1", 00:18:31.691 "superblock": true, 00:18:31.691 "num_base_bdevs": 2, 00:18:31.691 "num_base_bdevs_discovered": 2, 00:18:31.691 "num_base_bdevs_operational": 2, 00:18:31.691 "base_bdevs_list": [ 00:18:31.691 { 00:18:31.691 "name": "spare", 00:18:31.691 "uuid": "b7601b9e-38e2-5c15-8ff0-3a2071f53d30", 00:18:31.691 "is_configured": true, 00:18:31.691 "data_offset": 256, 00:18:31.691 "data_size": 7936 00:18:31.691 }, 00:18:31.691 { 00:18:31.691 "name": "BaseBdev2", 00:18:31.691 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:31.691 "is_configured": true, 00:18:31.691 "data_offset": 256, 00:18:31.691 "data_size": 7936 00:18:31.691 } 00:18:31.691 ] 00:18:31.691 }' 00:18:31.691 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.691 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.951 [2024-09-29 21:49:50.824330] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:31.951 [2024-09-29 21:49:50.824402] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:31.951 [2024-09-29 21:49:50.824492] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:31.951 [2024-09-29 21:49:50.824566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:31.951 [2024-09-29 
21:49:50.824626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.951 21:49:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.951 [2024-09-29 21:49:50.884291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:31.951 [2024-09-29 21:49:50.884339] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.951 [2024-09-29 21:49:50.884359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:31.951 [2024-09-29 21:49:50.884367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.951 [2024-09-29 21:49:50.886223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.951 [2024-09-29 21:49:50.886302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:31.951 [2024-09-29 21:49:50.886355] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:31.951 [2024-09-29 21:49:50.886406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:31.951 [2024-09-29 21:49:50.886498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.951 spare 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.951 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.212 [2024-09-29 21:49:50.986381] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:32.212 [2024-09-29 21:49:50.986418] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:32.212 [2024-09-29 21:49:50.986499] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:32.212 [2024-09-29 21:49:50.986566] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:32.212 [2024-09-29 21:49:50.986573] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:32.212 [2024-09-29 21:49:50.986640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.212 21:49:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.212 21:49:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.212 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.212 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.212 "name": "raid_bdev1", 00:18:32.212 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:32.212 "strip_size_kb": 0, 00:18:32.212 "state": "online", 00:18:32.212 "raid_level": "raid1", 00:18:32.212 "superblock": true, 00:18:32.212 "num_base_bdevs": 2, 00:18:32.212 "num_base_bdevs_discovered": 2, 00:18:32.212 "num_base_bdevs_operational": 2, 00:18:32.212 "base_bdevs_list": [ 00:18:32.212 { 00:18:32.212 "name": "spare", 00:18:32.212 "uuid": "b7601b9e-38e2-5c15-8ff0-3a2071f53d30", 00:18:32.212 "is_configured": true, 00:18:32.212 "data_offset": 256, 00:18:32.212 "data_size": 7936 00:18:32.212 }, 00:18:32.212 { 00:18:32.212 "name": "BaseBdev2", 00:18:32.212 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:32.212 "is_configured": true, 00:18:32.212 "data_offset": 256, 00:18:32.212 "data_size": 7936 00:18:32.212 } 00:18:32.212 ] 00:18:32.212 }' 00:18:32.212 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.212 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.472 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.473 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.473 21:49:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.473 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.473 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.473 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.473 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.473 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.473 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.473 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.473 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.473 "name": "raid_bdev1", 00:18:32.473 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:32.473 "strip_size_kb": 0, 00:18:32.473 "state": "online", 00:18:32.473 "raid_level": "raid1", 00:18:32.473 "superblock": true, 00:18:32.473 "num_base_bdevs": 2, 00:18:32.473 "num_base_bdevs_discovered": 2, 00:18:32.473 "num_base_bdevs_operational": 2, 00:18:32.473 "base_bdevs_list": [ 00:18:32.473 { 00:18:32.473 "name": "spare", 00:18:32.473 "uuid": "b7601b9e-38e2-5c15-8ff0-3a2071f53d30", 00:18:32.473 "is_configured": true, 00:18:32.473 "data_offset": 256, 00:18:32.473 "data_size": 7936 00:18:32.473 }, 00:18:32.473 { 00:18:32.473 "name": "BaseBdev2", 00:18:32.473 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:32.473 "is_configured": true, 00:18:32.473 "data_offset": 256, 00:18:32.473 "data_size": 7936 00:18:32.473 } 00:18:32.473 ] 00:18:32.473 }' 00:18:32.473 21:49:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.473 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.473 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.733 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.733 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:32.733 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.733 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.733 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.733 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.733 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.733 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:32.733 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.733 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.733 [2024-09-29 21:49:51.543199] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.733 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.733 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.733 21:49:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.733 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.734 "name": "raid_bdev1", 00:18:32.734 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:32.734 "strip_size_kb": 0, 00:18:32.734 "state": "online", 00:18:32.734 
"raid_level": "raid1", 00:18:32.734 "superblock": true, 00:18:32.734 "num_base_bdevs": 2, 00:18:32.734 "num_base_bdevs_discovered": 1, 00:18:32.734 "num_base_bdevs_operational": 1, 00:18:32.734 "base_bdevs_list": [ 00:18:32.734 { 00:18:32.734 "name": null, 00:18:32.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.734 "is_configured": false, 00:18:32.734 "data_offset": 0, 00:18:32.734 "data_size": 7936 00:18:32.734 }, 00:18:32.734 { 00:18:32.734 "name": "BaseBdev2", 00:18:32.734 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:32.734 "is_configured": true, 00:18:32.734 "data_offset": 256, 00:18:32.734 "data_size": 7936 00:18:32.734 } 00:18:32.734 ] 00:18:32.734 }' 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.734 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.304 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:33.304 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.304 21:49:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.304 [2024-09-29 21:49:51.986443] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:33.304 [2024-09-29 21:49:51.986610] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:33.304 [2024-09-29 21:49:51.986630] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:33.304 [2024-09-29 21:49:51.986657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:33.304 [2024-09-29 21:49:52.000319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:33.304 21:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.304 21:49:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:33.304 [2024-09-29 21:49:52.001970] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:34.244 "name": "raid_bdev1", 00:18:34.244 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:34.244 "strip_size_kb": 0, 00:18:34.244 "state": "online", 00:18:34.244 "raid_level": "raid1", 00:18:34.244 "superblock": true, 00:18:34.244 "num_base_bdevs": 2, 00:18:34.244 "num_base_bdevs_discovered": 2, 00:18:34.244 "num_base_bdevs_operational": 2, 00:18:34.244 "process": { 00:18:34.244 "type": "rebuild", 00:18:34.244 "target": "spare", 00:18:34.244 "progress": { 00:18:34.244 "blocks": 2560, 00:18:34.244 "percent": 32 00:18:34.244 } 00:18:34.244 }, 00:18:34.244 "base_bdevs_list": [ 00:18:34.244 { 00:18:34.244 "name": "spare", 00:18:34.244 "uuid": "b7601b9e-38e2-5c15-8ff0-3a2071f53d30", 00:18:34.244 "is_configured": true, 00:18:34.244 "data_offset": 256, 00:18:34.244 "data_size": 7936 00:18:34.244 }, 00:18:34.244 { 00:18:34.244 "name": "BaseBdev2", 00:18:34.244 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:34.244 "is_configured": true, 00:18:34.244 "data_offset": 256, 00:18:34.244 "data_size": 7936 00:18:34.244 } 00:18:34.244 ] 00:18:34.244 }' 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.244 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.244 [2024-09-29 21:49:53.161471] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:34.244 [2024-09-29 21:49:53.206380] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:34.244 [2024-09-29 21:49:53.206434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.244 [2024-09-29 21:49:53.206449] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:34.244 [2024-09-29 21:49:53.206458] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.508 21:49:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.508 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.508 "name": "raid_bdev1", 00:18:34.508 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:34.508 "strip_size_kb": 0, 00:18:34.508 "state": "online", 00:18:34.508 "raid_level": "raid1", 00:18:34.508 "superblock": true, 00:18:34.508 "num_base_bdevs": 2, 00:18:34.508 "num_base_bdevs_discovered": 1, 00:18:34.508 "num_base_bdevs_operational": 1, 00:18:34.508 "base_bdevs_list": [ 00:18:34.508 { 00:18:34.508 "name": null, 00:18:34.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.508 "is_configured": false, 00:18:34.508 "data_offset": 0, 00:18:34.508 "data_size": 7936 00:18:34.508 }, 00:18:34.508 { 00:18:34.508 "name": "BaseBdev2", 00:18:34.509 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:34.509 "is_configured": true, 00:18:34.509 "data_offset": 256, 00:18:34.509 "data_size": 7936 00:18:34.509 } 00:18:34.509 ] 00:18:34.509 }' 00:18:34.509 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.509 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.768 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:34.768 21:49:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.768 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.768 [2024-09-29 21:49:53.703110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:34.768 [2024-09-29 21:49:53.703207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.768 [2024-09-29 21:49:53.703246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:34.768 [2024-09-29 21:49:53.703278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.768 [2024-09-29 21:49:53.703478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.768 [2024-09-29 21:49:53.703527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:34.768 [2024-09-29 21:49:53.703595] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:34.768 [2024-09-29 21:49:53.703632] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:34.768 [2024-09-29 21:49:53.703670] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:34.768 [2024-09-29 21:49:53.703747] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.768 [2024-09-29 21:49:53.717420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:34.768 spare 00:18:34.768 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.768 [2024-09-29 21:49:53.719186] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:34.768 21:49:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:36.150 "name": "raid_bdev1", 00:18:36.150 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:36.150 "strip_size_kb": 0, 00:18:36.150 "state": "online", 00:18:36.150 "raid_level": "raid1", 00:18:36.150 "superblock": true, 00:18:36.150 "num_base_bdevs": 2, 00:18:36.150 "num_base_bdevs_discovered": 2, 00:18:36.150 "num_base_bdevs_operational": 2, 00:18:36.150 "process": { 00:18:36.150 "type": "rebuild", 00:18:36.150 "target": "spare", 00:18:36.150 "progress": { 00:18:36.150 "blocks": 2560, 00:18:36.150 "percent": 32 00:18:36.150 } 00:18:36.150 }, 00:18:36.150 "base_bdevs_list": [ 00:18:36.150 { 00:18:36.150 "name": "spare", 00:18:36.150 "uuid": "b7601b9e-38e2-5c15-8ff0-3a2071f53d30", 00:18:36.150 "is_configured": true, 00:18:36.150 "data_offset": 256, 00:18:36.150 "data_size": 7936 00:18:36.150 }, 00:18:36.150 { 00:18:36.150 "name": "BaseBdev2", 00:18:36.150 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:36.150 "is_configured": true, 00:18:36.150 "data_offset": 256, 00:18:36.150 "data_size": 7936 00:18:36.150 } 00:18:36.150 ] 00:18:36.150 }' 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.150 [2024-09-29 
21:49:54.878720] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:36.150 [2024-09-29 21:49:54.923645] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:36.150 [2024-09-29 21:49:54.923700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.150 [2024-09-29 21:49:54.923718] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:36.150 [2024-09-29 21:49:54.923726] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.150 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.151 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.151 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.151 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.151 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:36.151 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.151 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.151 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.151 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.151 21:49:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.151 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.151 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.151 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.151 21:49:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.151 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.151 "name": "raid_bdev1", 00:18:36.151 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:36.151 "strip_size_kb": 0, 00:18:36.151 "state": "online", 00:18:36.151 "raid_level": "raid1", 00:18:36.151 "superblock": true, 00:18:36.151 "num_base_bdevs": 2, 00:18:36.151 "num_base_bdevs_discovered": 1, 00:18:36.151 "num_base_bdevs_operational": 1, 00:18:36.151 "base_bdevs_list": [ 00:18:36.151 { 00:18:36.151 "name": null, 00:18:36.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.151 "is_configured": false, 00:18:36.151 "data_offset": 0, 00:18:36.151 "data_size": 7936 00:18:36.151 }, 00:18:36.151 { 00:18:36.151 "name": "BaseBdev2", 00:18:36.151 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:36.151 "is_configured": true, 00:18:36.151 "data_offset": 256, 00:18:36.151 "data_size": 7936 00:18:36.151 } 00:18:36.151 ] 00:18:36.151 }' 00:18:36.151 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.151 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:36.721 21:49:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.721 "name": "raid_bdev1", 00:18:36.721 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:36.721 "strip_size_kb": 0, 00:18:36.721 "state": "online", 00:18:36.721 "raid_level": "raid1", 00:18:36.721 "superblock": true, 00:18:36.721 "num_base_bdevs": 2, 00:18:36.721 "num_base_bdevs_discovered": 1, 00:18:36.721 "num_base_bdevs_operational": 1, 00:18:36.721 "base_bdevs_list": [ 00:18:36.721 { 00:18:36.721 "name": null, 00:18:36.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.721 "is_configured": false, 00:18:36.721 "data_offset": 0, 00:18:36.721 "data_size": 7936 00:18:36.721 }, 00:18:36.721 { 00:18:36.721 "name": "BaseBdev2", 00:18:36.721 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:36.721 "is_configured": true, 00:18:36.721 "data_offset": 256, 
00:18:36.721 "data_size": 7936 00:18:36.721 } 00:18:36.721 ] 00:18:36.721 }' 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.721 [2024-09-29 21:49:55.562348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:36.721 [2024-09-29 21:49:55.562400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.721 [2024-09-29 21:49:55.562422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:36.721 [2024-09-29 21:49:55.562431] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.721 [2024-09-29 21:49:55.562581] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.721 [2024-09-29 21:49:55.562592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:36.721 [2024-09-29 21:49:55.562635] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:36.721 [2024-09-29 21:49:55.562647] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:36.721 [2024-09-29 21:49:55.562656] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:36.721 [2024-09-29 21:49:55.562666] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:36.721 BaseBdev1 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.721 21:49:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.660 21:49:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.660 "name": "raid_bdev1", 00:18:37.660 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:37.660 "strip_size_kb": 0, 00:18:37.660 "state": "online", 00:18:37.660 "raid_level": "raid1", 00:18:37.660 "superblock": true, 00:18:37.660 "num_base_bdevs": 2, 00:18:37.660 "num_base_bdevs_discovered": 1, 00:18:37.660 "num_base_bdevs_operational": 1, 00:18:37.660 "base_bdevs_list": [ 00:18:37.660 { 00:18:37.660 "name": null, 00:18:37.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.660 "is_configured": false, 00:18:37.660 "data_offset": 0, 00:18:37.660 "data_size": 7936 00:18:37.660 }, 00:18:37.660 { 00:18:37.660 "name": "BaseBdev2", 00:18:37.660 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:37.660 "is_configured": true, 00:18:37.660 "data_offset": 256, 00:18:37.660 "data_size": 7936 00:18:37.660 } 00:18:37.660 ] 00:18:37.660 }' 00:18:37.660 21:49:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.660 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.231 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.231 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.231 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.231 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.231 21:49:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.231 "name": "raid_bdev1", 00:18:38.231 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:38.231 "strip_size_kb": 0, 00:18:38.231 "state": "online", 00:18:38.231 "raid_level": "raid1", 00:18:38.231 "superblock": true, 00:18:38.231 "num_base_bdevs": 2, 00:18:38.231 "num_base_bdevs_discovered": 1, 00:18:38.231 "num_base_bdevs_operational": 1, 00:18:38.231 "base_bdevs_list": [ 00:18:38.231 { 00:18:38.231 "name": 
null, 00:18:38.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.231 "is_configured": false, 00:18:38.231 "data_offset": 0, 00:18:38.231 "data_size": 7936 00:18:38.231 }, 00:18:38.231 { 00:18:38.231 "name": "BaseBdev2", 00:18:38.231 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:38.231 "is_configured": true, 00:18:38.231 "data_offset": 256, 00:18:38.231 "data_size": 7936 00:18:38.231 } 00:18:38.231 ] 00:18:38.231 }' 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.231 [2024-09-29 21:49:57.139635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:38.231 [2024-09-29 21:49:57.139752] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:38.231 [2024-09-29 21:49:57.139768] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:38.231 request: 00:18:38.231 { 00:18:38.231 "base_bdev": "BaseBdev1", 00:18:38.231 "raid_bdev": "raid_bdev1", 00:18:38.231 "method": "bdev_raid_add_base_bdev", 00:18:38.231 "req_id": 1 00:18:38.231 } 00:18:38.231 Got JSON-RPC error response 00:18:38.231 response: 00:18:38.231 { 00:18:38.231 "code": -22, 00:18:38.231 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:38.231 } 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:38.231 21:49:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:39.170 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:39.170 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.170 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.170 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.170 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.430 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.430 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.430 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.430 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.430 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.430 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.430 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.430 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.430 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.430 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.430 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.430 "name": "raid_bdev1", 00:18:39.430 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:39.430 "strip_size_kb": 0, 
00:18:39.430 "state": "online", 00:18:39.430 "raid_level": "raid1", 00:18:39.430 "superblock": true, 00:18:39.430 "num_base_bdevs": 2, 00:18:39.430 "num_base_bdevs_discovered": 1, 00:18:39.430 "num_base_bdevs_operational": 1, 00:18:39.430 "base_bdevs_list": [ 00:18:39.430 { 00:18:39.430 "name": null, 00:18:39.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.430 "is_configured": false, 00:18:39.430 "data_offset": 0, 00:18:39.430 "data_size": 7936 00:18:39.430 }, 00:18:39.430 { 00:18:39.430 "name": "BaseBdev2", 00:18:39.430 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:39.430 "is_configured": true, 00:18:39.430 "data_offset": 256, 00:18:39.430 "data_size": 7936 00:18:39.430 } 00:18:39.430 ] 00:18:39.430 }' 00:18:39.430 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.430 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.690 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:39.690 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.690 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:39.690 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:39.690 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.690 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.690 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.690 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.690 
21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.690 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.949 "name": "raid_bdev1", 00:18:39.949 "uuid": "acc962f5-3667-4192-952c-99d02efec976", 00:18:39.949 "strip_size_kb": 0, 00:18:39.949 "state": "online", 00:18:39.949 "raid_level": "raid1", 00:18:39.949 "superblock": true, 00:18:39.949 "num_base_bdevs": 2, 00:18:39.949 "num_base_bdevs_discovered": 1, 00:18:39.949 "num_base_bdevs_operational": 1, 00:18:39.949 "base_bdevs_list": [ 00:18:39.949 { 00:18:39.949 "name": null, 00:18:39.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.949 "is_configured": false, 00:18:39.949 "data_offset": 0, 00:18:39.949 "data_size": 7936 00:18:39.949 }, 00:18:39.949 { 00:18:39.949 "name": "BaseBdev2", 00:18:39.949 "uuid": "8d87d8c2-bd5e-5877-b6fe-f03b703a22c5", 00:18:39.949 "is_configured": true, 00:18:39.949 "data_offset": 256, 00:18:39.949 "data_size": 7936 00:18:39.949 } 00:18:39.949 ] 00:18:39.949 }' 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89073 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89073 ']' 00:18:39.949 21:49:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89073 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89073 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:39.949 killing process with pid 89073 00:18:39.949 Received shutdown signal, test time was about 60.000000 seconds 00:18:39.949 00:18:39.949 Latency(us) 00:18:39.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.949 =================================================================================================================== 00:18:39.949 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89073' 00:18:39.949 21:49:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89073 00:18:39.949 [2024-09-29 21:49:58.835858] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:39.949 [2024-09-29 21:49:58.835959] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.949 [2024-09-29 21:49:58.835998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.949 [2024-09-29 21:49:58.836009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:39.949 21:49:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89073 00:18:40.216 [2024-09-29 21:49:59.115357] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:41.599 21:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:41.599 00:18:41.599 real 0m17.580s 00:18:41.599 user 0m23.112s 00:18:41.599 sys 0m1.671s 00:18:41.599 ************************************ 00:18:41.599 END TEST raid_rebuild_test_sb_md_interleaved 00:18:41.599 ************************************ 00:18:41.599 21:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:41.599 21:50:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.599 21:50:00 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:41.599 21:50:00 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:41.599 21:50:00 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89073 ']' 00:18:41.599 21:50:00 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89073 00:18:41.599 21:50:00 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:41.599 00:18:41.599 real 12m5.582s 00:18:41.599 user 16m3.686s 00:18:41.599 sys 2m2.431s 00:18:41.599 21:50:00 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:41.599 21:50:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:41.599 ************************************ 00:18:41.599 END TEST bdev_raid 00:18:41.599 ************************************ 00:18:41.599 21:50:00 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:41.599 21:50:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:41.599 21:50:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:41.599 21:50:00 -- common/autotest_common.sh@10 -- # set +x 00:18:41.599 ************************************ 00:18:41.599 START TEST spdkcli_raid 00:18:41.599 
************************************ 00:18:41.599 21:50:00 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:41.599 * Looking for test storage... 00:18:41.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:41.599 21:50:00 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:41.860 21:50:00 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:41.860 21:50:00 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:18:41.860 21:50:00 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.860 21:50:00 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:41.860 21:50:00 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.860 21:50:00 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:41.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.860 --rc genhtml_branch_coverage=1 00:18:41.860 --rc genhtml_function_coverage=1 00:18:41.860 --rc genhtml_legend=1 00:18:41.860 --rc geninfo_all_blocks=1 00:18:41.860 --rc geninfo_unexecuted_blocks=1 00:18:41.860 00:18:41.860 ' 00:18:41.860 21:50:00 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:41.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.860 --rc genhtml_branch_coverage=1 00:18:41.860 --rc genhtml_function_coverage=1 00:18:41.860 --rc genhtml_legend=1 00:18:41.860 --rc geninfo_all_blocks=1 00:18:41.860 --rc geninfo_unexecuted_blocks=1 00:18:41.860 00:18:41.860 ' 00:18:41.860 
21:50:00 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:41.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.860 --rc genhtml_branch_coverage=1 00:18:41.860 --rc genhtml_function_coverage=1 00:18:41.860 --rc genhtml_legend=1 00:18:41.860 --rc geninfo_all_blocks=1 00:18:41.860 --rc geninfo_unexecuted_blocks=1 00:18:41.860 00:18:41.860 ' 00:18:41.860 21:50:00 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:41.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.860 --rc genhtml_branch_coverage=1 00:18:41.860 --rc genhtml_function_coverage=1 00:18:41.860 --rc genhtml_legend=1 00:18:41.860 --rc geninfo_all_blocks=1 00:18:41.860 --rc geninfo_unexecuted_blocks=1 00:18:41.860 00:18:41.860 ' 00:18:41.860 21:50:00 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:41.860 21:50:00 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:41.860 21:50:00 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:41.860 21:50:00 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:41.860 21:50:00 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:41.860 21:50:00 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:41.860 21:50:00 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:41.860 21:50:00 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:41.861 21:50:00 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:41.861 21:50:00 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:41.861 21:50:00 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:41.861 21:50:00 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:41.861 21:50:00 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:41.861 21:50:00 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:41.861 21:50:00 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:41.861 21:50:00 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:41.861 21:50:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:41.861 21:50:00 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:41.861 21:50:00 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89755 00:18:41.861 21:50:00 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:41.861 21:50:00 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89755 00:18:41.861 21:50:00 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 89755 ']' 00:18:41.861 21:50:00 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.861 21:50:00 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.861 21:50:00 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.861 21:50:00 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.861 21:50:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:41.861 [2024-09-29 21:50:00.823614] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:41.861 [2024-09-29 21:50:00.823811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89755 ] 00:18:42.121 [2024-09-29 21:50:00.992951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:42.380 [2024-09-29 21:50:01.190952] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.380 [2024-09-29 21:50:01.190953] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.319 21:50:01 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:43.319 21:50:01 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:18:43.319 21:50:01 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:43.319 21:50:01 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:43.319 21:50:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.319 21:50:02 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:43.319 21:50:02 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:43.319 21:50:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:43.319 21:50:02 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:43.319 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:43.319 ' 00:18:44.699 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:44.699 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:44.699 21:50:03 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:44.699 21:50:03 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.699 21:50:03 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:44.958 21:50:03 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:44.958 21:50:03 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:44.958 21:50:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.958 21:50:03 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:44.958 ' 00:18:45.897 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:46.157 21:50:04 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:46.157 21:50:04 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:46.157 21:50:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.157 21:50:04 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:46.157 21:50:04 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:46.157 21:50:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.157 21:50:04 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:46.157 21:50:04 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:46.727 21:50:05 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:46.727 21:50:05 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:46.727 21:50:05 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:46.727 21:50:05 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:46.727 21:50:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.727 21:50:05 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:46.727 21:50:05 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:46.727 21:50:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:46.727 21:50:05 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:46.727 ' 00:18:47.664 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:47.664 21:50:06 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:47.664 21:50:06 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:47.664 21:50:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.664 21:50:06 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:47.664 21:50:06 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.665 21:50:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:47.924 21:50:06 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:47.924 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:47.924 ' 00:18:49.305 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:49.305 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:49.305 21:50:08 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:49.305 21:50:08 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:49.305 21:50:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.305 21:50:08 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89755 00:18:49.305 21:50:08 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89755 ']' 00:18:49.305 21:50:08 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89755 00:18:49.305 21:50:08 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:18:49.305 21:50:08 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:49.305 21:50:08 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89755 00:18:49.305 21:50:08 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:49.305 21:50:08 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:49.305 21:50:08 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89755' 00:18:49.305 killing process with pid 89755 00:18:49.305 21:50:08 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 89755 00:18:49.305 21:50:08 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 89755 00:18:51.843 21:50:10 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:51.843 21:50:10 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89755 ']' 00:18:51.843 21:50:10 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89755 00:18:51.843 21:50:10 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 89755 ']' 00:18:51.843 21:50:10 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 89755 00:18:51.843 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89755) - No such process 00:18:51.843 21:50:10 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 89755 is not found' 00:18:51.843 Process with pid 89755 is not found 00:18:51.843 21:50:10 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:51.843 21:50:10 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:51.843 21:50:10 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:51.843 21:50:10 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:51.843 00:18:51.843 real 0m10.153s 00:18:51.843 user 0m20.586s 00:18:51.843 sys 
0m1.167s 00:18:51.843 21:50:10 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:51.843 21:50:10 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.843 ************************************ 00:18:51.843 END TEST spdkcli_raid 00:18:51.843 ************************************ 00:18:51.843 21:50:10 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:51.843 21:50:10 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:51.843 21:50:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:51.843 21:50:10 -- common/autotest_common.sh@10 -- # set +x 00:18:51.843 ************************************ 00:18:51.843 START TEST blockdev_raid5f 00:18:51.843 ************************************ 00:18:51.843 21:50:10 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:51.843 * Looking for test storage... 00:18:51.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:51.843 21:50:10 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:51.843 21:50:10 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:18:51.843 21:50:10 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:52.104 21:50:10 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:52.104 21:50:10 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:52.104 21:50:10 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:52.104 21:50:10 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:52.104 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.104 --rc genhtml_branch_coverage=1 00:18:52.104 --rc genhtml_function_coverage=1 00:18:52.104 --rc genhtml_legend=1 00:18:52.104 --rc geninfo_all_blocks=1 00:18:52.104 --rc geninfo_unexecuted_blocks=1 00:18:52.104 00:18:52.104 ' 00:18:52.104 21:50:10 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:52.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.104 --rc genhtml_branch_coverage=1 00:18:52.104 --rc genhtml_function_coverage=1 00:18:52.104 --rc genhtml_legend=1 00:18:52.104 --rc geninfo_all_blocks=1 00:18:52.104 --rc geninfo_unexecuted_blocks=1 00:18:52.104 00:18:52.104 ' 00:18:52.104 21:50:10 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:52.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.104 --rc genhtml_branch_coverage=1 00:18:52.104 --rc genhtml_function_coverage=1 00:18:52.104 --rc genhtml_legend=1 00:18:52.104 --rc geninfo_all_blocks=1 00:18:52.104 --rc geninfo_unexecuted_blocks=1 00:18:52.104 00:18:52.104 ' 00:18:52.104 21:50:10 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:52.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:52.104 --rc genhtml_branch_coverage=1 00:18:52.104 --rc genhtml_function_coverage=1 00:18:52.104 --rc genhtml_legend=1 00:18:52.104 --rc geninfo_all_blocks=1 00:18:52.104 --rc geninfo_unexecuted_blocks=1 00:18:52.104 00:18:52.104 ' 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90035 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:52.104 21:50:10 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90035 00:18:52.104 21:50:10 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 90035 ']' 00:18:52.104 21:50:10 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.104 21:50:10 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:52.104 21:50:10 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.104 21:50:10 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:52.104 21:50:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:52.105 [2024-09-29 21:50:11.025201] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:52.105 [2024-09-29 21:50:11.025364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90035 ] 00:18:52.364 [2024-09-29 21:50:11.186472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.624 [2024-09-29 21:50:11.381383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.565 Malloc0 00:18:53.565 Malloc1 00:18:53.565 Malloc2 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.565 21:50:12 
blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "bbeb2909-b39e-4eb8-b8fc-be2241e5b656"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bbeb2909-b39e-4eb8-b8fc-be2241e5b656",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "bbeb2909-b39e-4eb8-b8fc-be2241e5b656",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "28c00626-13c8-4137-8b8f-fe36ce0a9a21",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ed963319-6ec1-4967-8ac3-b65cf991c5a3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "a7c809fe-abde-44f7-8f6f-9b4c931b5be9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:53.565 21:50:12 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90035 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 90035 ']' 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 90035 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:53.565 
21:50:12 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90035 00:18:53.565 killing process with pid 90035 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90035' 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 90035 00:18:53.565 21:50:12 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 90035 00:18:56.861 21:50:15 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:56.861 21:50:15 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:56.861 21:50:15 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:56.861 21:50:15 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:56.861 21:50:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:56.861 ************************************ 00:18:56.861 START TEST bdev_hello_world 00:18:56.861 ************************************ 00:18:56.861 21:50:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:56.861 [2024-09-29 21:50:15.261875] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:56.861 [2024-09-29 21:50:15.262052] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90101 ] 00:18:56.861 [2024-09-29 21:50:15.426754] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.861 [2024-09-29 21:50:15.626437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.431 [2024-09-29 21:50:16.139935] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:57.431 [2024-09-29 21:50:16.140075] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:57.431 [2024-09-29 21:50:16.140108] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:57.431 [2024-09-29 21:50:16.140588] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:57.431 [2024-09-29 21:50:16.140753] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:57.431 [2024-09-29 21:50:16.140798] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:57.431 [2024-09-29 21:50:16.140858] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:57.431 00:18:57.431 [2024-09-29 21:50:16.140899] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:58.823 00:18:58.823 real 0m2.405s 00:18:58.823 user 0m2.030s 00:18:58.823 sys 0m0.254s 00:18:58.823 21:50:17 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:58.823 ************************************ 00:18:58.823 END TEST bdev_hello_world 00:18:58.823 ************************************ 00:18:58.823 21:50:17 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:58.823 21:50:17 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:58.823 21:50:17 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:58.823 21:50:17 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:58.823 21:50:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:58.823 ************************************ 00:18:58.823 START TEST bdev_bounds 00:18:58.823 ************************************ 00:18:58.823 21:50:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:18:58.823 21:50:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90145 00:18:58.823 21:50:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:58.823 21:50:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:58.823 21:50:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90145' 00:18:58.823 Process bdevio pid: 90145 00:18:58.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:58.823 21:50:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90145 00:18:58.823 21:50:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 90145 ']' 00:18:58.823 21:50:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.823 21:50:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:58.823 21:50:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.824 21:50:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:58.824 21:50:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:58.824 [2024-09-29 21:50:17.744839] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:58.824 [2024-09-29 21:50:17.744947] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90145 ] 00:18:59.086 [2024-09-29 21:50:17.908379] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:59.364 [2024-09-29 21:50:18.104811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.364 [2024-09-29 21:50:18.104961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.364 [2024-09-29 21:50:18.105004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.978 21:50:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:59.978 21:50:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:18:59.978 21:50:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # 
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:59.978 I/O targets: 00:18:59.978 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:59.978 00:18:59.978 00:18:59.978 CUnit - A unit testing framework for C - Version 2.1-3 00:18:59.978 http://cunit.sourceforge.net/ 00:18:59.978 00:18:59.978 00:18:59.978 Suite: bdevio tests on: raid5f 00:18:59.978 Test: blockdev write read block ...passed 00:18:59.978 Test: blockdev write zeroes read block ...passed 00:18:59.978 Test: blockdev write zeroes read no split ...passed 00:18:59.978 Test: blockdev write zeroes read split ...passed 00:18:59.978 Test: blockdev write zeroes read split partial ...passed 00:18:59.978 Test: blockdev reset ...passed 00:18:59.978 Test: blockdev write read 8 blocks ...passed 00:18:59.978 Test: blockdev write read size > 128k ...passed 00:18:59.978 Test: blockdev write read invalid size ...passed 00:18:59.978 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:59.978 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:59.978 Test: blockdev write read max offset ...passed 00:18:59.978 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:59.978 Test: blockdev writev readv 8 blocks ...passed 00:18:59.978 Test: blockdev writev readv 30 x 1block ...passed 00:18:59.978 Test: blockdev writev readv block ...passed 00:18:59.978 Test: blockdev writev readv size > 128k ...passed 00:18:59.978 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:59.978 Test: blockdev comparev and writev ...passed 00:18:59.978 Test: blockdev nvme passthru rw ...passed 00:18:59.978 Test: blockdev nvme passthru vendor specific ...passed 00:18:59.978 Test: blockdev nvme admin passthru ...passed 00:18:59.978 Test: blockdev copy ...passed 00:18:59.978 00:18:59.978 Run Summary: Type Total Ran Passed Failed Inactive 00:18:59.978 suites 1 1 n/a 0 0 00:18:59.978 tests 23 23 23 0 0 00:18:59.978 asserts 130 130 130 0 n/a 
00:18:59.978 00:18:59.978 Elapsed time = 0.555 seconds 00:18:59.978 0 00:19:00.238 21:50:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90145 00:19:00.238 21:50:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 90145 ']' 00:19:00.238 21:50:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 90145 00:19:00.238 21:50:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:19:00.238 21:50:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:00.238 21:50:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90145 00:19:00.238 21:50:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:00.238 21:50:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:00.238 21:50:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90145' 00:19:00.238 killing process with pid 90145 00:19:00.238 21:50:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 90145 00:19:00.238 21:50:19 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 90145 00:19:01.617 21:50:20 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:01.617 00:19:01.617 real 0m2.817s 00:19:01.617 user 0m6.584s 00:19:01.617 sys 0m0.387s 00:19:01.617 21:50:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:01.617 21:50:20 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:01.617 ************************************ 00:19:01.617 END TEST bdev_bounds 00:19:01.617 ************************************ 00:19:01.617 21:50:20 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:01.617 
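The bdevio run above lists its I/O target as "raid5f: 131072 blocks of 512 bytes (64 MiB)". That geometry can be sanity-checked with a one-line helper (a hypothetical function, not part of the test suite):

```python
def bdev_size_bytes(num_blocks: int, block_size: int) -> int:
    # Capacity = block count * block size; bdevio reported the
    # raid5f target as 131072 blocks of 512 bytes.
    return num_blocks * block_size


assert bdev_size_bytes(131072, 512) == 64 * 1024 * 1024  # 64 MiB
```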
21:50:20 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:01.617 21:50:20 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:01.617 21:50:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:01.617 ************************************ 00:19:01.617 START TEST bdev_nbd 00:19:01.617 ************************************ 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- 
# local nbd_list 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90205 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90205 /var/tmp/spdk-nbd.sock 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 90205 ']' 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:01.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.617 21:50:20 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:01.877 [2024-09-29 21:50:20.648753] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
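The `waitforlisten 90205 /var/tmp/spdk-nbd.sock` step above blocks until the bdev_svc process is accepting RPCs on its UNIX-domain socket. A simplified poll-until-connectable sketch of that idea (the real helper also retries the RPC itself and checks the pid; `wait_for_listen` here is a hypothetical name):

```python
import os
import socket
import time


def wait_for_listen(sock_path: str, retries: int = 100, delay: float = 0.05) -> bool:
    """Poll until a UNIX-domain socket at sock_path accepts a
    connection, or give up after `retries` attempts."""
    for _ in range(retries):
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True
            except OSError:
                pass  # socket file exists but nothing is listening yet
            finally:
                s.close()
        time.sleep(delay)
    return False
```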
00:19:01.877 [2024-09-29 21:50:20.648947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.877 [2024-09-29 21:50:20.813904] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.136 [2024-09-29 21:50:21.015590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:02.704 21:50:21 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:02.964 1+0 records in 00:19:02.964 1+0 records out 00:19:02.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401626 s, 10.2 MB/s 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:02.964 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:03.224 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:03.224 { 00:19:03.224 "nbd_device": "/dev/nbd0", 00:19:03.224 "bdev_name": "raid5f" 00:19:03.224 } 00:19:03.224 ]' 00:19:03.224 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:03.224 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:03.224 { 00:19:03.224 "nbd_device": "/dev/nbd0", 00:19:03.224 "bdev_name": "raid5f" 00:19:03.224 } 00:19:03.224 ]' 00:19:03.224 21:50:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:03.224 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:03.224 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.224 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:03.224 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:03.224 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:03.224 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:03.224 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:03.483 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:03.742 /dev/nbd0 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:03.742 21:50:22 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:03.742 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:03.743 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:03.743 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:03.743 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:03.743 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:03.743 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:03.743 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:03.743 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:04.002 1+0 records in 00:19:04.002 1+0 records out 00:19:04.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499621 s, 8.2 MB/s 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:04.002 { 00:19:04.002 "nbd_device": "/dev/nbd0", 00:19:04.002 "bdev_name": "raid5f" 00:19:04.002 } 00:19:04.002 ]' 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:04.002 { 00:19:04.002 "nbd_device": "/dev/nbd0", 00:19:04.002 "bdev_name": "raid5f" 00:19:04.002 } 00:19:04.002 ]' 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:04.002 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:04.262 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:04.262 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:04.262 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:04.262 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:04.262 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:04.262 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:04.262 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:04.262 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:04.262 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:04.262 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:04.262 21:50:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:04.262 256+0 records in 00:19:04.262 256+0 records out 00:19:04.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142277 s, 73.7 MB/s 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:04.262 256+0 records in 00:19:04.262 256+0 records out 00:19:04.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294607 s, 35.6 MB/s 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.262 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:04.522 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:04.522 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:04.522 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:04.522 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.522 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.522 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:04.522 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:04.522 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.522 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:04.522 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.522 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
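The nbd data-verify pass above writes 256 x 4096 bytes of /dev/urandom through dd onto /dev/nbd0, then byte-compares with `cmp -b -n 1M`. The same write/verify cycle can be sketched against an ordinary file (hypothetical helper names; a real run against /dev/nbd0 would also use direct I/O with aligned buffers, which this sketch skips):

```python
import os


def write_pattern(path: str, bs: int = 4096, count: int = 256) -> bytes:
    """Write bs*count random bytes to `path` and return the data,
    like `dd if=/dev/urandom of=... bs=4096 count=256` in the log."""
    data = os.urandom(bs * count)
    with open(path, "wb") as f:
        f.write(data)
    return data


def verify_pattern(path: str, expected: bytes) -> bool:
    """Read the file back and byte-compare, the role `cmp -b -n 1M`
    plays in nbd_dd_data_verify."""
    with open(path, "rb") as f:
        return f.read(len(expected)) == expected
```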
00:19:04.522 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:04.782 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:05.041 malloc_lvol_verify 00:19:05.041 21:50:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:05.041 a8ac99a4-66ce-4e7c-9b88-5d2f2259da35 00:19:05.041 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:05.301 ba53e639-4183-44ef-917d-3f44c1599aa5 00:19:05.301 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:05.560 /dev/nbd0 00:19:05.560 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:05.560 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:05.560 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:05.560 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:05.560 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:05.560 mke2fs 1.47.0 (5-Feb-2023) 00:19:05.560 Discarding device blocks: 0/4096 done 00:19:05.560 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:05.560 00:19:05.560 Allocating group tables: 0/1 done 00:19:05.560 Writing inode tables: 0/1 done 00:19:05.560 Creating journal (1024 blocks): done 00:19:05.560 Writing superblocks and filesystem accounting information: 0/1 done 00:19:05.560 00:19:05.560 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:05.560 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.560 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:05.560 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:05.560 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:05.560 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:05.560 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90205 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 90205 ']' 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 90205 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90205 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:05.820 killing process with pid 90205 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90205' 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 90205 00:19:05.820 21:50:24 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 90205 00:19:07.729 21:50:26 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:07.729 00:19:07.729 real 0m5.646s 00:19:07.729 user 0m7.574s 00:19:07.729 sys 0m1.312s 00:19:07.729 21:50:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:07.729 21:50:26 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:07.729 ************************************ 00:19:07.729 END TEST bdev_nbd 00:19:07.729 ************************************ 00:19:07.729 21:50:26 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:07.729 21:50:26 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:19:07.729 21:50:26 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:19:07.729 21:50:26 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:19:07.729 21:50:26 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:07.729 21:50:26 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:07.729 21:50:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:07.729 ************************************ 00:19:07.729 START TEST bdev_fio 00:19:07.729 ************************************ 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:07.729 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:07.729 ************************************ 00:19:07.729 START TEST bdev_fio_rw_verify 00:19:07.729 ************************************ 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:07.729 21:50:26 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:07.729 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:07.729 fio-3.35 00:19:07.729 Starting 1 thread 00:19:19.947 00:19:19.947 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90409: Sun Sep 29 21:50:37 2024 00:19:19.947 read: IOPS=12.6k, BW=49.4MiB/s (51.8MB/s)(494MiB/10001msec) 00:19:19.947 slat (usec): min=16, max=148, avg=18.42, stdev= 1.96 00:19:19.947 clat (usec): min=10, max=330, avg=126.36, stdev=43.62 00:19:19.947 lat (usec): min=29, max=352, avg=144.78, stdev=43.82 00:19:19.947 clat percentiles (usec): 00:19:19.947 | 50.000th=[ 130], 99.000th=[ 208], 99.900th=[ 233], 99.990th=[ 273], 00:19:19.947 | 99.999th=[ 318] 00:19:19.947 write: IOPS=13.2k, BW=51.6MiB/s (54.1MB/s)(510MiB/9878msec); 0 zone resets 00:19:19.947 slat (usec): min=7, max=174, avg=15.95, stdev= 3.64 00:19:19.947 clat (usec): min=58, max=1033, avg=293.81, stdev=38.81 00:19:19.947 lat (usec): min=72, max=1208, avg=309.76, stdev=39.66 00:19:19.947 clat percentiles (usec): 00:19:19.947 | 50.000th=[ 297], 99.000th=[ 371], 99.900th=[ 537], 99.990th=[ 930], 00:19:19.947 | 99.999th=[ 1012] 00:19:19.947 bw ( KiB/s): min=50512, max=54520, per=98.86%, avg=52239.58, stdev=1259.01, samples=19 00:19:19.947 iops : min=12628, max=13630, avg=13059.89, stdev=314.75, samples=19 00:19:19.947 lat (usec) : 20=0.01%, 50=0.01%, 100=16.74%, 
250=39.08%, 500=44.11% 00:19:19.947 lat (usec) : 750=0.04%, 1000=0.02% 00:19:19.947 lat (msec) : 2=0.01% 00:19:19.947 cpu : usr=98.84%, sys=0.46%, ctx=67, majf=0, minf=10297 00:19:19.947 IO depths : 1=7.6%, 2=19.9%, 4=55.2%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.947 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.947 issued rwts: total=126475,130494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.947 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:19.947 00:19:19.947 Run status group 0 (all jobs): 00:19:19.947 READ: bw=49.4MiB/s (51.8MB/s), 49.4MiB/s-49.4MiB/s (51.8MB/s-51.8MB/s), io=494MiB (518MB), run=10001-10001msec 00:19:19.947 WRITE: bw=51.6MiB/s (54.1MB/s), 51.6MiB/s-51.6MiB/s (54.1MB/s-54.1MB/s), io=510MiB (535MB), run=9878-9878msec 00:19:20.210 ----------------------------------------------------- 00:19:20.210 Suppressions used: 00:19:20.210 count bytes template 00:19:20.210 1 7 /usr/src/fio/parse.c 00:19:20.210 99 9504 /usr/src/fio/iolog.c 00:19:20.210 1 8 libtcmalloc_minimal.so 00:19:20.210 1 904 libcrypto.so 00:19:20.210 ----------------------------------------------------- 00:19:20.210 00:19:20.210 00:19:20.210 real 0m12.653s 00:19:20.210 user 0m12.925s 00:19:20.210 sys 0m0.729s 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:20.210 ************************************ 00:19:20.210 END TEST bdev_fio_rw_verify 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:20.210 ************************************ 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "bbeb2909-b39e-4eb8-b8fc-be2241e5b656"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bbeb2909-b39e-4eb8-b8fc-be2241e5b656",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "bbeb2909-b39e-4eb8-b8fc-be2241e5b656",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "28c00626-13c8-4137-8b8f-fe36ce0a9a21",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ed963319-6ec1-4967-8ac3-b65cf991c5a3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "a7c809fe-abde-44f7-8f6f-9b4c931b5be9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:20.210 21:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:20.471 21:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:20.471 21:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:20.471 /home/vagrant/spdk_repo/spdk 00:19:20.471 21:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:20.471 21:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:20.471 21:50:39 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:20.471 00:19:20.471 real 0m12.947s 00:19:20.471 user 0m13.056s 00:19:20.471 sys 0m0.859s 00:19:20.471 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:20.471 21:50:39 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:20.471 ************************************ 00:19:20.471 END TEST bdev_fio 00:19:20.471 ************************************ 00:19:20.471 21:50:39 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:20.471 21:50:39 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:20.471 21:50:39 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:20.471 21:50:39 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:20.471 21:50:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:20.471 ************************************ 00:19:20.471 START TEST bdev_verify 00:19:20.471 ************************************ 00:19:20.471 21:50:39 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:20.471 [2024-09-29 21:50:39.368628] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:20.471 [2024-09-29 21:50:39.368751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90574 ] 00:19:20.731 [2024-09-29 21:50:39.537233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:20.990 [2024-09-29 21:50:39.731013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.990 [2024-09-29 21:50:39.731062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.560 Running I/O for 5 seconds... 00:19:26.698 10949.00 IOPS, 42.77 MiB/s 11007.50 IOPS, 43.00 MiB/s 10959.67 IOPS, 42.81 MiB/s 10960.00 IOPS, 42.81 MiB/s 10960.20 IOPS, 42.81 MiB/s 00:19:26.698 Latency(us) 00:19:26.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.698 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:26.698 Verification LBA range: start 0x0 length 0x2000 00:19:26.698 raid5f : 5.03 4455.09 17.40 0.00 0.00 43275.27 243.26 30907.81 00:19:26.698 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:26.698 Verification LBA range: start 0x2000 length 0x2000 00:19:26.698 raid5f : 5.02 6506.52 25.42 0.00 0.00 29599.81 215.53 22436.78 00:19:26.698 =================================================================================================================== 00:19:26.698 Total : 10961.61 42.82 0.00 0.00 35160.45 215.53 30907.81 00:19:28.077 00:19:28.077 real 0m7.406s 00:19:28.077 user 0m13.512s 00:19:28.077 sys 0m0.283s 00:19:28.077 21:50:46 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:28.077 21:50:46 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:28.077 ************************************ 00:19:28.077 END TEST bdev_verify 00:19:28.077 
************************************ 00:19:28.077 21:50:46 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:28.077 21:50:46 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:28.077 21:50:46 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:28.077 21:50:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:28.077 ************************************ 00:19:28.077 START TEST bdev_verify_big_io 00:19:28.077 ************************************ 00:19:28.077 21:50:46 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:28.077 [2024-09-29 21:50:46.859410] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:28.077 [2024-09-29 21:50:46.859530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90668 ] 00:19:28.077 [2024-09-29 21:50:47.031653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:28.337 [2024-09-29 21:50:47.234967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.337 [2024-09-29 21:50:47.235008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.905 Running I/O for 5 seconds... 
00:19:34.295 633.00 IOPS, 39.56 MiB/s 761.00 IOPS, 47.56 MiB/s 782.00 IOPS, 48.88 MiB/s 793.25 IOPS, 49.58 MiB/s 812.00 IOPS, 50.75 MiB/s 00:19:34.295 Latency(us) 00:19:34.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.295 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:34.295 Verification LBA range: start 0x0 length 0x200 00:19:34.295 raid5f : 5.36 355.08 22.19 0.00 0.00 8910480.76 443.58 382798.92 00:19:34.295 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:34.295 Verification LBA range: start 0x200 length 0x200 00:19:34.295 raid5f : 5.37 449.32 28.08 0.00 0.00 7119882.43 122.52 311367.55 00:19:34.295 =================================================================================================================== 00:19:34.295 Total : 804.40 50.28 0.00 0.00 7910218.35 122.52 382798.92 00:19:35.676 00:19:35.676 real 0m7.807s 00:19:35.676 user 0m14.277s 00:19:35.676 sys 0m0.280s 00:19:35.676 21:50:54 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:35.676 21:50:54 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:35.676 ************************************ 00:19:35.676 END TEST bdev_verify_big_io 00:19:35.676 ************************************ 00:19:35.676 21:50:54 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:35.676 21:50:54 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:35.676 21:50:54 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:35.676 21:50:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:35.676 ************************************ 00:19:35.676 START TEST bdev_write_zeroes 00:19:35.676 ************************************ 
00:19:35.676 21:50:54 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:35.936 [2024-09-29 21:50:54.742236] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:35.936 [2024-09-29 21:50:54.742344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90772 ] 00:19:35.936 [2024-09-29 21:50:54.905287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.196 [2024-09-29 21:50:55.101426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.766 Running I/O for 1 seconds... 00:19:37.706 30327.00 IOPS, 118.46 MiB/s 00:19:37.706 Latency(us) 00:19:37.706 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.706 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:37.706 raid5f : 1.01 30303.21 118.37 0.00 0.00 4212.26 1201.97 5780.90 00:19:37.706 =================================================================================================================== 00:19:37.706 Total : 30303.21 118.37 0.00 0.00 4212.26 1201.97 5780.90 00:19:39.088 00:19:39.088 real 0m3.404s 00:19:39.088 user 0m3.009s 00:19:39.088 sys 0m0.271s 00:19:39.088 21:50:58 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:39.088 21:50:58 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:39.088 ************************************ 00:19:39.088 END TEST bdev_write_zeroes 00:19:39.088 ************************************ 00:19:39.349 21:50:58 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:39.349 21:50:58 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:39.349 21:50:58 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:39.349 21:50:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:39.349 ************************************ 00:19:39.349 START TEST bdev_json_nonenclosed 00:19:39.349 ************************************ 00:19:39.349 21:50:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:39.349 [2024-09-29 21:50:58.229204] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:39.349 [2024-09-29 21:50:58.229328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90825 ] 00:19:39.609 [2024-09-29 21:50:58.399126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.869 [2024-09-29 21:50:58.594517] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.869 [2024-09-29 21:50:58.594624] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:19:39.869 [2024-09-29 21:50:58.594647] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:39.869 [2024-09-29 21:50:58.594656] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:40.129 00:19:40.129 real 0m0.849s 00:19:40.129 user 0m0.594s 00:19:40.129 sys 0m0.150s 00:19:40.129 21:50:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:40.129 21:50:58 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:40.129 ************************************ 00:19:40.129 END TEST bdev_json_nonenclosed 00:19:40.129 ************************************ 00:19:40.129 21:50:59 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:40.129 21:50:59 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:40.129 21:50:59 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:40.129 21:50:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:40.129 ************************************ 00:19:40.129 START TEST bdev_json_nonarray 00:19:40.129 ************************************ 00:19:40.129 21:50:59 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:40.391 [2024-09-29 21:50:59.146184] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:40.391 [2024-09-29 21:50:59.146313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90856 ] 00:19:40.391 [2024-09-29 21:50:59.309630] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.665 [2024-09-29 21:50:59.508236] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.665 [2024-09-29 21:50:59.508345] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:19:40.665 [2024-09-29 21:50:59.508386] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:40.665 [2024-09-29 21:50:59.508395] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:40.962 00:19:40.962 real 0m0.839s 00:19:40.962 user 0m0.584s 00:19:40.962 sys 0m0.149s 00:19:40.962 21:50:59 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:40.962 21:50:59 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:40.962 ************************************ 00:19:40.962 END TEST bdev_json_nonarray 00:19:40.962 ************************************ 00:19:41.234 21:50:59 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:19:41.234 21:50:59 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:19:41.234 21:50:59 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:19:41.234 21:50:59 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:41.234 21:50:59 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:19:41.234 21:50:59 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:41.234 21:50:59 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:41.234 21:50:59 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:41.234 21:50:59 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:41.234 21:50:59 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:41.234 21:50:59 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:41.234 00:19:41.234 real 0m49.293s 00:19:41.234 user 1m5.767s 00:19:41.234 sys 0m5.070s 00:19:41.234 21:50:59 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:41.234 21:50:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:41.234 ************************************ 00:19:41.234 END TEST blockdev_raid5f 00:19:41.234 ************************************ 00:19:41.234 21:51:00 -- spdk/autotest.sh@194 -- # uname -s 00:19:41.234 21:51:00 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:41.234 21:51:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:41.234 21:51:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:41.234 21:51:00 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@256 -- # timing_exit lib 00:19:41.234 21:51:00 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:41.234 21:51:00 -- common/autotest_common.sh@10 -- # set +x 00:19:41.234 21:51:00 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@334 -- # '[' 
0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:41.234 21:51:00 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:19:41.234 21:51:00 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:41.234 21:51:00 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:41.234 21:51:00 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:19:41.234 21:51:00 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:19:41.234 21:51:00 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:19:41.234 21:51:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:41.234 21:51:00 -- common/autotest_common.sh@10 -- # set +x 00:19:41.234 21:51:00 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:19:41.234 21:51:00 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:19:41.234 21:51:00 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:19:41.234 21:51:00 -- common/autotest_common.sh@10 -- # set +x 00:19:43.774 INFO: APP EXITING 00:19:43.774 INFO: killing all VMs 00:19:43.774 INFO: killing vhost app 00:19:43.774 INFO: EXIT DONE 00:19:44.034 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:44.034 Waiting for block devices as requested 00:19:44.034 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:44.294 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:45.234 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:45.234 Cleaning 00:19:45.234 Removing: /var/run/dpdk/spdk0/config 00:19:45.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:45.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:45.234 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:45.234 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:45.234 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:45.234 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:45.234 Removing: /dev/shm/spdk_tgt_trace.pid56830 00:19:45.234 Removing: /var/run/dpdk/spdk0 00:19:45.234 Removing: /var/run/dpdk/spdk_pid56589 00:19:45.234 Removing: /var/run/dpdk/spdk_pid56830 00:19:45.234 Removing: /var/run/dpdk/spdk_pid57070 00:19:45.234 Removing: /var/run/dpdk/spdk_pid57174 00:19:45.234 Removing: /var/run/dpdk/spdk_pid57240 00:19:45.234 Removing: /var/run/dpdk/spdk_pid57369 00:19:45.234 Removing: /var/run/dpdk/spdk_pid57398 00:19:45.234 Removing: /var/run/dpdk/spdk_pid57608 00:19:45.234 Removing: /var/run/dpdk/spdk_pid57727 00:19:45.234 Removing: /var/run/dpdk/spdk_pid57840 00:19:45.234 Removing: /var/run/dpdk/spdk_pid57967 00:19:45.234 Removing: /var/run/dpdk/spdk_pid58081 00:19:45.234 Removing: /var/run/dpdk/spdk_pid58126 00:19:45.234 Removing: /var/run/dpdk/spdk_pid58162 00:19:45.234 Removing: /var/run/dpdk/spdk_pid58244 00:19:45.234 Removing: /var/run/dpdk/spdk_pid58361 00:19:45.234 Removing: /var/run/dpdk/spdk_pid58808 00:19:45.234 Removing: /var/run/dpdk/spdk_pid58889 00:19:45.234 Removing: /var/run/dpdk/spdk_pid58963 00:19:45.234 Removing: /var/run/dpdk/spdk_pid58992 00:19:45.234 Removing: /var/run/dpdk/spdk_pid59152 00:19:45.234 Removing: /var/run/dpdk/spdk_pid59168 00:19:45.234 Removing: /var/run/dpdk/spdk_pid59331 00:19:45.234 Removing: /var/run/dpdk/spdk_pid59353 00:19:45.234 Removing: /var/run/dpdk/spdk_pid59428 00:19:45.234 Removing: /var/run/dpdk/spdk_pid59446 00:19:45.234 Removing: /var/run/dpdk/spdk_pid59515 00:19:45.234 Removing: /var/run/dpdk/spdk_pid59539 00:19:45.234 Removing: /var/run/dpdk/spdk_pid59745 00:19:45.234 Removing: /var/run/dpdk/spdk_pid59790 00:19:45.234 Removing: /var/run/dpdk/spdk_pid59879 00:19:45.234 Removing: /var/run/dpdk/spdk_pid61254 00:19:45.234 Removing: 
/var/run/dpdk/spdk_pid61466 00:19:45.234 Removing: /var/run/dpdk/spdk_pid61616 00:19:45.234 Removing: /var/run/dpdk/spdk_pid62260 00:19:45.234 Removing: /var/run/dpdk/spdk_pid62472 00:19:45.234 Removing: /var/run/dpdk/spdk_pid62612 00:19:45.234 Removing: /var/run/dpdk/spdk_pid63261 00:19:45.234 Removing: /var/run/dpdk/spdk_pid63591 00:19:45.234 Removing: /var/run/dpdk/spdk_pid63738 00:19:45.495 Removing: /var/run/dpdk/spdk_pid65123 00:19:45.495 Removing: /var/run/dpdk/spdk_pid65377 00:19:45.495 Removing: /var/run/dpdk/spdk_pid65527 00:19:45.495 Removing: /var/run/dpdk/spdk_pid66923 00:19:45.495 Removing: /var/run/dpdk/spdk_pid67181 00:19:45.495 Removing: /var/run/dpdk/spdk_pid67330 00:19:45.495 Removing: /var/run/dpdk/spdk_pid68716 00:19:45.495 Removing: /var/run/dpdk/spdk_pid69167 00:19:45.495 Removing: /var/run/dpdk/spdk_pid69314 00:19:45.495 Removing: /var/run/dpdk/spdk_pid70800 00:19:45.495 Removing: /var/run/dpdk/spdk_pid71067 00:19:45.495 Removing: /var/run/dpdk/spdk_pid71218 00:19:45.495 Removing: /var/run/dpdk/spdk_pid72712 00:19:45.495 Removing: /var/run/dpdk/spdk_pid72971 00:19:45.495 Removing: /var/run/dpdk/spdk_pid73122 00:19:45.495 Removing: /var/run/dpdk/spdk_pid74608 00:19:45.495 Removing: /var/run/dpdk/spdk_pid75095 00:19:45.495 Removing: /var/run/dpdk/spdk_pid75246 00:19:45.495 Removing: /var/run/dpdk/spdk_pid75390 00:19:45.495 Removing: /var/run/dpdk/spdk_pid75808 00:19:45.495 Removing: /var/run/dpdk/spdk_pid76538 00:19:45.495 Removing: /var/run/dpdk/spdk_pid76914 00:19:45.495 Removing: /var/run/dpdk/spdk_pid77608 00:19:45.495 Removing: /var/run/dpdk/spdk_pid78053 00:19:45.495 Removing: /var/run/dpdk/spdk_pid78803 00:19:45.495 Removing: /var/run/dpdk/spdk_pid79212 00:19:45.495 Removing: /var/run/dpdk/spdk_pid81183 00:19:45.495 Removing: /var/run/dpdk/spdk_pid81628 00:19:45.495 Removing: /var/run/dpdk/spdk_pid82057 00:19:45.495 Removing: /var/run/dpdk/spdk_pid84162 00:19:45.495 Removing: /var/run/dpdk/spdk_pid84642 00:19:45.495 Removing: 
/var/run/dpdk/spdk_pid85164 00:19:45.495 Removing: /var/run/dpdk/spdk_pid86223 00:19:45.495 Removing: /var/run/dpdk/spdk_pid86551 00:19:45.495 Removing: /var/run/dpdk/spdk_pid87489 00:19:45.495 Removing: /var/run/dpdk/spdk_pid87812 00:19:45.495 Removing: /var/run/dpdk/spdk_pid88750 00:19:45.495 Removing: /var/run/dpdk/spdk_pid89073 00:19:45.495 Removing: /var/run/dpdk/spdk_pid89755 00:19:45.495 Removing: /var/run/dpdk/spdk_pid90035 00:19:45.495 Removing: /var/run/dpdk/spdk_pid90101 00:19:45.495 Removing: /var/run/dpdk/spdk_pid90145 00:19:45.495 Removing: /var/run/dpdk/spdk_pid90394 00:19:45.495 Removing: /var/run/dpdk/spdk_pid90574 00:19:45.495 Removing: /var/run/dpdk/spdk_pid90668 00:19:45.495 Removing: /var/run/dpdk/spdk_pid90772 00:19:45.495 Removing: /var/run/dpdk/spdk_pid90825 00:19:45.495 Removing: /var/run/dpdk/spdk_pid90856 00:19:45.495 Clean 00:19:45.755 21:51:04 -- common/autotest_common.sh@1451 -- # return 0 00:19:45.755 21:51:04 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:19:45.755 21:51:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:45.755 21:51:04 -- common/autotest_common.sh@10 -- # set +x 00:19:45.755 21:51:04 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:19:45.755 21:51:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:45.755 21:51:04 -- common/autotest_common.sh@10 -- # set +x 00:19:45.755 21:51:04 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:45.755 21:51:04 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:45.755 21:51:04 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:45.755 21:51:04 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:19:45.755 21:51:04 -- spdk/autotest.sh@394 -- # hostname 00:19:45.755 21:51:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:46.016 geninfo: WARNING: invalid characters removed from testname! 00:20:12.577 21:51:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:12.577 21:51:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:13.147 21:51:31 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:15.055 21:51:33 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:16.966 21:51:35 -- spdk/autotest.sh@402 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:18.876 21:51:37 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:20:20.789 21:51:39 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:20:20.789 21:51:39 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:20:20.789 21:51:39 -- common/autotest_common.sh@1681 -- $ lcov --version 00:20:20.789 21:51:39 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:20:21.051 21:51:39 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:20:21.051 21:51:39 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:20:21.051 21:51:39 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:20:21.051 21:51:39 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:20:21.051 21:51:39 -- scripts/common.sh@336 -- $ IFS=.-: 00:20:21.051 21:51:39 -- scripts/common.sh@336 -- $ read -ra ver1 00:20:21.051 21:51:39 -- scripts/common.sh@337 -- $ IFS=.-: 00:20:21.051 21:51:39 -- scripts/common.sh@337 -- $ read -ra ver2 00:20:21.051 21:51:39 -- scripts/common.sh@338 -- $ local 'op=<' 00:20:21.051 21:51:39 -- scripts/common.sh@340 -- $ ver1_l=2 00:20:21.051 21:51:39 -- scripts/common.sh@341 -- $ ver2_l=1 00:20:21.051 21:51:39 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:20:21.051 21:51:39 -- scripts/common.sh@344 -- $ case "$op" in 00:20:21.051 21:51:39 -- scripts/common.sh@345 -- $ : 1 
00:20:21.051 21:51:39 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:20:21.051 21:51:39 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:21.051 21:51:39 -- scripts/common.sh@365 -- $ decimal 1 00:20:21.051 21:51:39 -- scripts/common.sh@353 -- $ local d=1 00:20:21.051 21:51:39 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:20:21.051 21:51:39 -- scripts/common.sh@355 -- $ echo 1 00:20:21.051 21:51:39 -- scripts/common.sh@365 -- $ ver1[v]=1 00:20:21.051 21:51:39 -- scripts/common.sh@366 -- $ decimal 2 00:20:21.051 21:51:39 -- scripts/common.sh@353 -- $ local d=2 00:20:21.051 21:51:39 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:20:21.051 21:51:39 -- scripts/common.sh@355 -- $ echo 2 00:20:21.051 21:51:39 -- scripts/common.sh@366 -- $ ver2[v]=2 00:20:21.051 21:51:39 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:20:21.051 21:51:39 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:20:21.051 21:51:39 -- scripts/common.sh@368 -- $ return 0 00:20:21.051 21:51:39 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.051 21:51:39 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:20:21.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.051 --rc genhtml_branch_coverage=1 00:20:21.051 --rc genhtml_function_coverage=1 00:20:21.051 --rc genhtml_legend=1 00:20:21.051 --rc geninfo_all_blocks=1 00:20:21.051 --rc geninfo_unexecuted_blocks=1 00:20:21.051 00:20:21.051 ' 00:20:21.051 21:51:39 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:20:21.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.051 --rc genhtml_branch_coverage=1 00:20:21.051 --rc genhtml_function_coverage=1 00:20:21.051 --rc genhtml_legend=1 00:20:21.051 --rc geninfo_all_blocks=1 00:20:21.051 --rc geninfo_unexecuted_blocks=1 00:20:21.051 00:20:21.051 ' 00:20:21.051 21:51:39 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 
00:20:21.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.051 --rc genhtml_branch_coverage=1 00:20:21.051 --rc genhtml_function_coverage=1 00:20:21.051 --rc genhtml_legend=1 00:20:21.051 --rc geninfo_all_blocks=1 00:20:21.051 --rc geninfo_unexecuted_blocks=1 00:20:21.051 00:20:21.051 ' 00:20:21.051 21:51:39 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:20:21.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.051 --rc genhtml_branch_coverage=1 00:20:21.051 --rc genhtml_function_coverage=1 00:20:21.051 --rc genhtml_legend=1 00:20:21.051 --rc geninfo_all_blocks=1 00:20:21.051 --rc geninfo_unexecuted_blocks=1 00:20:21.051 00:20:21.051 ' 00:20:21.051 21:51:39 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.051 21:51:39 -- scripts/common.sh@15 -- $ shopt -s extglob 00:20:21.052 21:51:39 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:20:21.052 21:51:39 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.052 21:51:39 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.052 21:51:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.052 21:51:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.052 21:51:39 -- paths/export.sh@4 
-- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.052 21:51:39 -- paths/export.sh@5 -- $ export PATH 00:20:21.052 21:51:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.052 21:51:39 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:20:21.052 21:51:39 -- common/autobuild_common.sh@479 -- $ date +%s 00:20:21.052 21:51:39 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727646699.XXXXXX 00:20:21.052 21:51:39 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727646699.tIZF5R 00:20:21.052 21:51:39 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:20:21.052 21:51:39 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:20:21.052 21:51:39 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:20:21.052 21:51:39 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:20:21.052 21:51:39 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:20:21.052 21:51:39 -- common/autobuild_common.sh@495 -- $ 
get_config_params 00:20:21.052 21:51:39 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:20:21.052 21:51:39 -- common/autotest_common.sh@10 -- $ set +x 00:20:21.052 21:51:39 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:20:21.052 21:51:39 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:20:21.052 21:51:39 -- pm/common@17 -- $ local monitor 00:20:21.052 21:51:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:21.052 21:51:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:21.052 21:51:39 -- pm/common@25 -- $ sleep 1 00:20:21.052 21:51:39 -- pm/common@21 -- $ date +%s 00:20:21.052 21:51:39 -- pm/common@21 -- $ date +%s 00:20:21.052 21:51:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727646699 00:20:21.052 21:51:39 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727646699 00:20:21.052 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727646699_collect-cpu-load.pm.log 00:20:21.052 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727646699_collect-vmstat.pm.log 00:20:21.996 21:51:40 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:20:21.996 21:51:40 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:20:21.996 21:51:40 -- spdk/autopackage.sh@14 -- $ timing_finish 00:20:21.996 21:51:40 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:20:21.996 21:51:40 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:20:21.996 
21:51:40 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:21.996 21:51:40 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:20:21.996 21:51:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:20:21.996 21:51:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:20:21.996 21:51:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:21.996 21:51:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:20:21.996 21:51:40 -- pm/common@44 -- $ pid=92362 00:20:21.996 21:51:40 -- pm/common@50 -- $ kill -TERM 92362 00:20:21.996 21:51:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:20:21.996 21:51:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:20:21.996 21:51:40 -- pm/common@44 -- $ pid=92364 00:20:21.996 21:51:40 -- pm/common@50 -- $ kill -TERM 92364 00:20:21.996 + [[ -n 5423 ]] 00:20:21.996 + sudo kill 5423 00:20:22.268 [Pipeline] } 00:20:22.284 [Pipeline] // timeout 00:20:22.290 [Pipeline] } 00:20:22.305 [Pipeline] // stage 00:20:22.311 [Pipeline] } 00:20:22.326 [Pipeline] // catchError 00:20:22.335 [Pipeline] stage 00:20:22.338 [Pipeline] { (Stop VM) 00:20:22.351 [Pipeline] sh 00:20:22.637 + vagrant halt 00:20:25.178 ==> default: Halting domain... 00:20:33.323 [Pipeline] sh 00:20:33.606 + vagrant destroy -f 00:20:36.150 ==> default: Removing domain... 
00:20:36.162 [Pipeline] sh 00:20:36.445 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:20:36.455 [Pipeline] } 00:20:36.469 [Pipeline] // stage 00:20:36.473 [Pipeline] } 00:20:36.487 [Pipeline] // dir 00:20:36.492 [Pipeline] } 00:20:36.505 [Pipeline] // wrap 00:20:36.510 [Pipeline] } 00:20:36.522 [Pipeline] // catchError 00:20:36.531 [Pipeline] stage 00:20:36.533 [Pipeline] { (Epilogue) 00:20:36.544 [Pipeline] sh 00:20:36.828 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:41.057 [Pipeline] catchError 00:20:41.059 [Pipeline] { 00:20:41.074 [Pipeline] sh 00:20:41.362 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:41.362 Artifacts sizes are good 00:20:41.372 [Pipeline] } 00:20:41.388 [Pipeline] // catchError 00:20:41.399 [Pipeline] archiveArtifacts 00:20:41.406 Archiving artifacts 00:20:41.519 [Pipeline] cleanWs 00:20:41.528 [WS-CLEANUP] Deleting project workspace... 00:20:41.528 [WS-CLEANUP] Deferred wipeout is used... 00:20:41.534 [WS-CLEANUP] done 00:20:41.535 [Pipeline] } 00:20:41.544 [Pipeline] // stage 00:20:41.548 [Pipeline] } 00:20:41.557 [Pipeline] // node 00:20:41.561 [Pipeline] End of Pipeline 00:20:41.601 Finished: SUCCESS